Test Report: QEMU_macOS 19576

2e9b50ac88536491e648f1503809a6b59d99d481:2024-09-06:36104

Failed tests (96/270)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.91
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.05
33 TestAddons/parallel/Registry 71.3
46 TestCertOptions 10.24
47 TestCertExpiration 197.29
48 TestDockerFlags 12.2
49 TestForceSystemdFlag 10.16
50 TestForceSystemdEnv 10.1
95 TestFunctional/parallel/ServiceCmdConnect 30.34
111 TestFunctional/parallel/License 0.15
167 TestMultiControlPlane/serial/StopSecondaryNode 312.31
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.14
169 TestMultiControlPlane/serial/RestartSecondaryNode 305.25
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.57
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 231.68
177 TestImageBuild/serial/Setup 10.07
180 TestJSONOutput/start/Command 9.86
186 TestJSONOutput/pause/Command 0.08
192 TestJSONOutput/unpause/Command 0.04
209 TestMinikubeProfile 10.14
212 TestMountStart/serial/StartWithMountFirst 9.95
215 TestMultiNode/serial/FreshStart2Nodes 9.96
216 TestMultiNode/serial/DeployApp2Nodes 98.25
217 TestMultiNode/serial/PingHostFrom2Pods 0.09
218 TestMultiNode/serial/AddNode 0.08
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.08
221 TestMultiNode/serial/CopyFile 0.06
222 TestMultiNode/serial/StopNode 0.14
223 TestMultiNode/serial/StartAfterStop 47.62
224 TestMultiNode/serial/RestartKeepsNodes 8.72
225 TestMultiNode/serial/DeleteNode 0.1
226 TestMultiNode/serial/StopMultiNode 3.28
227 TestMultiNode/serial/RestartMultiNode 5.26
228 TestMultiNode/serial/ValidateNameConflict 19.9
232 TestPreload 9.95
234 TestScheduledStopUnix 10.23
235 TestSkaffold 12.7
238 TestRunningBinaryUpgrade 601.23
240 TestKubernetesUpgrade 17.7
254 TestStoppedBinaryUpgrade/Upgrade 611.73
263 TestPause/serial/Start 9.92
267 TestNoKubernetes/serial/StartWithK8s 9.96
268 TestNoKubernetes/serial/StartWithStopK8s 5.31
269 TestNoKubernetes/serial/Start 5.31
273 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.99
274 TestNoKubernetes/serial/StartNoArgs 5.37
275 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.61
277 TestNetworkPlugins/group/auto/Start 10.4
278 TestNetworkPlugins/group/kindnet/Start 9.83
279 TestNetworkPlugins/group/calico/Start 9.76
280 TestNetworkPlugins/group/custom-flannel/Start 9.88
281 TestNetworkPlugins/group/false/Start 9.84
282 TestNetworkPlugins/group/enable-default-cni/Start 9.75
283 TestNetworkPlugins/group/flannel/Start 10.12
284 TestNetworkPlugins/group/bridge/Start 9.79
285 TestNetworkPlugins/group/kubenet/Start 9.89
287 TestStartStop/group/old-k8s-version/serial/FirstStart 10.07
288 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
292 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
296 TestStartStop/group/old-k8s-version/serial/Pause 0.1
298 TestStartStop/group/no-preload/serial/FirstStart 9.93
299 TestStartStop/group/no-preload/serial/DeployApp 0.09
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
303 TestStartStop/group/no-preload/serial/SecondStart 5.26
304 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
306 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
307 TestStartStop/group/no-preload/serial/Pause 0.1
309 TestStartStop/group/embed-certs/serial/FirstStart 10.01
310 TestStartStop/group/embed-certs/serial/DeployApp 0.09
311 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/embed-certs/serial/SecondStart 5.21
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
318 TestStartStop/group/embed-certs/serial/Pause 0.1
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.05
322 TestStartStop/group/newest-cni/serial/FirstStart 9.92
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
327 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.78
332 TestStartStop/group/newest-cni/serial/SecondStart 5.25
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (16.91s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (16.905416708s)

-- stdout --
	{"specversion":"1.0","id":"4deaf8b9-8f63-488e-b386-2ae1056feb10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-666000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02fe49a3-28b5-41fc-85e7-7891dd15a0f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"c1dfd2ba-0b1c-4432-b3a3-c9155401488a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig"}}
	{"specversion":"1.0","id":"6f187f6b-16ff-46d1-bba5-a81d7f590b20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b916a054-4680-413a-b956-9fe7890123f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"598e1e2c-3ef6-49bb-8896-ceebe2eabac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube"}}
	{"specversion":"1.0","id":"3283ac17-b425-4412-ba6f-0a84c63709c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"fc3332f6-5be5-4e30-a17b-a4781e87ff8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0d2ac85-1cc8-44cb-95c1-39ad9ad2fef2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"afde2bdc-9bd5-439f-83f1-9401c78d24ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7aab37f3-3552-45dd-82a8-aad508425a16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-666000\" primary control-plane node in \"download-only-666000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"68e3ba2d-66d1-48a9-a8ab-96d401968300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4972e1d-b8b2-4985-bf1e-14b5dea13269","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960] Decompressors:map[bz2:0x140005e1dd0 gz:0x140005e1dd8 tar:0x140005e1d80 tar.bz2:0x140005e1d90 tar.gz:0x140005e1da0 tar.xz:0x140005e1db0 tar.zst:0x140005e1dc0 tbz2:0x140005e1d90 tgz:0x140005e1da0 txz:0x140005e1db0 tzst:0x140005e1dc0 xz:0x140005e1de0 zip:0x140005e1df0 zst:0x140005e1de8] Getters:map[file:0x140005e2600 http:0x1400056c190 https:0x1400056c1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"cf46f8b2-dff1-4933-9d0f-926a5ddbdc06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0906 11:28:35.476896    2674 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:28:35.477015    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:35.477019    2674 out.go:358] Setting ErrFile to fd 2...
	I0906 11:28:35.477021    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:35.477143    2674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	W0906 11:28:35.477205    2674 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19576-2143/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19576-2143/.minikube/config/config.json: no such file or directory
	I0906 11:28:35.478548    2674 out.go:352] Setting JSON to true
	I0906 11:28:35.495999    2674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1685,"bootTime":1725645630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 11:28:35.496067    2674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:28:35.501510    2674 out.go:97] [download-only-666000] minikube v1.34.0 on Darwin 14.5 (arm64)
	W0906 11:28:35.501697    2674 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 11:28:35.501706    2674 notify.go:220] Checking for updates...
	I0906 11:28:35.504481    2674 out.go:169] MINIKUBE_LOCATION=19576
	I0906 11:28:35.507407    2674 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:28:35.511495    2674 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 11:28:35.514523    2674 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:28:35.517428    2674 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	W0906 11:28:35.523472    2674 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 11:28:35.523681    2674 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:28:35.527456    2674 out.go:97] Using the qemu2 driver based on user configuration
	I0906 11:28:35.527473    2674 start.go:297] selected driver: qemu2
	I0906 11:28:35.527487    2674 start.go:901] validating driver "qemu2" against <nil>
	I0906 11:28:35.527556    2674 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:28:35.531482    2674 out.go:169] Automatically selected the socket_vmnet network
	I0906 11:28:35.537192    2674 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 11:28:35.537270    2674 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 11:28:35.537301    2674 cni.go:84] Creating CNI manager for ""
	I0906 11:28:35.537317    2674 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 11:28:35.537381    2674 start.go:340] cluster config:
	{Name:download-only-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:28:35.542964    2674 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:28:35.547446    2674 out.go:97] Downloading VM boot image ...
	I0906 11:28:35.547463    2674 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso
	I0906 11:28:40.745719    2674 out.go:97] Starting "download-only-666000" primary control-plane node in "download-only-666000" cluster
	I0906 11:28:40.745754    2674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:40.810034    2674 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 11:28:40.810061    2674 cache.go:56] Caching tarball of preloaded images
	I0906 11:28:40.810264    2674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:40.815333    2674 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0906 11:28:40.815340    2674 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 11:28:40.916000    2674 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 11:28:51.066846    2674 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 11:28:51.067032    2674 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 11:28:51.763516    2674 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0906 11:28:51.763717    2674 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/download-only-666000/config.json ...
	I0906 11:28:51.763748    2674 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/download-only-666000/config.json: {Name:mkac9a06d5758b5208f0be2aba6ce4f44041b623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:28:51.763997    2674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:51.764170    2674 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0906 11:28:52.311420    2674 out.go:193] 
	W0906 11:28:52.317558    2674 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960] Decompressors:map[bz2:0x140005e1dd0 gz:0x140005e1dd8 tar:0x140005e1d80 tar.bz2:0x140005e1d90 tar.gz:0x140005e1da0 tar.xz:0x140005e1db0 tar.zst:0x140005e1dc0 tbz2:0x140005e1d90 tgz:0x140005e1da0 txz:0x140005e1db0 tzst:0x140005e1dc0 xz:0x140005e1de0 zip:0x140005e1df0 zst:0x140005e1de8] Getters:map[file:0x140005e2600 http:0x1400056c190 https:0x1400056c1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0906 11:28:52.317586    2674 out_reason.go:110] 
	W0906 11:28:52.325426    2674 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 11:28:52.328451    2674 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-666000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (16.91s)
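Note: the root cause is the 404 on the kubectl checksum URL above; dl.k8s.io does not host a darwin/arm64 kubectl build for v1.20.0 (Apple-silicon client binaries only shipped with later Kubernetes releases), so this download cannot succeed on an M1 agent regardless of retries. A minimal Go sketch to confirm the missing artifact outside the test suite; the URL is copied from the log, everything else is illustrative:

	// check_kubectl_404.go: probe the checksum URL that the failing download used.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url) // a HEAD request is enough to read the status code
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // a 404 here reproduces the INET_CACHE_KUBECTL error
	}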

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
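Note: this failure is downstream of the previous one; because the v1.20.0 kubectl download exited with status 40, nothing was ever written to the cache path, so the stat check necessarily fails.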

TestOffline (10.05s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-868000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-868000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.886233041s)

-- stdout --
	* [offline-docker-868000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-868000" primary control-plane node in "offline-docker-868000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-868000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:22:34.263756    5908 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:22:34.263889    5908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:34.263892    5908 out.go:358] Setting ErrFile to fd 2...
	I0906 12:22:34.263895    5908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:34.264035    5908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:22:34.265130    5908 out.go:352] Setting JSON to false
	I0906 12:22:34.282644    5908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4924,"bootTime":1725645630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:22:34.282721    5908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:22:34.287970    5908 out.go:177] * [offline-docker-868000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:22:34.295854    5908 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:22:34.295868    5908 notify.go:220] Checking for updates...
	I0906 12:22:34.303776    5908 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:22:34.306911    5908 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:22:34.309812    5908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:22:34.312794    5908 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:22:34.315819    5908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:22:34.319189    5908 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:22:34.319253    5908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:22:34.322777    5908 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:22:34.329739    5908 start.go:297] selected driver: qemu2
	I0906 12:22:34.329750    5908 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:22:34.329756    5908 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:22:34.331656    5908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:22:34.334806    5908 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:22:34.337944    5908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:22:34.337963    5908 cni.go:84] Creating CNI manager for ""
	I0906 12:22:34.337969    5908 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:22:34.337972    5908 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:22:34.337998    5908 start.go:340] cluster config:
	{Name:offline-docker-868000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:22:34.341777    5908 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:34.348823    5908 out.go:177] * Starting "offline-docker-868000" primary control-plane node in "offline-docker-868000" cluster
	I0906 12:22:34.352686    5908 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:22:34.352715    5908 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:22:34.352724    5908 cache.go:56] Caching tarball of preloaded images
	I0906 12:22:34.352802    5908 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:22:34.352807    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:22:34.352869    5908 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/offline-docker-868000/config.json ...
	I0906 12:22:34.352881    5908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/offline-docker-868000/config.json: {Name:mk8aa7b1e15552fc3888d4b32d6eb8533990a11f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:22:34.353118    5908 start.go:360] acquireMachinesLock for offline-docker-868000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:34.353158    5908 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "offline-docker-868000"
	I0906 12:22:34.353169    5908 start.go:93] Provisioning new machine with config: &{Name:offline-docker-868000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:34.353200    5908 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:34.356904    5908 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:22:34.372674    5908 start.go:159] libmachine.API.Create for "offline-docker-868000" (driver="qemu2")
	I0906 12:22:34.372705    5908 client.go:168] LocalClient.Create starting
	I0906 12:22:34.372768    5908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:22:34.372799    5908 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:34.372809    5908 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:34.372849    5908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:22:34.372871    5908 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:34.372878    5908 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:34.373288    5908 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:22:34.529995    5908 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:34.720741    5908 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:34.720757    5908 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:34.720947    5908 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2
	I0906 12:22:34.730684    5908 main.go:141] libmachine: STDOUT: 
	I0906 12:22:34.730706    5908 main.go:141] libmachine: STDERR: 
	I0906 12:22:34.730801    5908 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2 +20000M
	I0906 12:22:34.742204    5908 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:34.742222    5908 main.go:141] libmachine: STDERR: 
	I0906 12:22:34.742242    5908 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2
	I0906 12:22:34.742251    5908 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:34.742266    5908 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:34.742295    5908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:b4:95:b9:23:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2
	I0906 12:22:34.743976    5908 main.go:141] libmachine: STDOUT: 
	I0906 12:22:34.743994    5908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:34.744012    5908 client.go:171] duration metric: took 371.303375ms to LocalClient.Create
	I0906 12:22:36.746122    5908 start.go:128] duration metric: took 2.392932375s to createHost
	I0906 12:22:36.746146    5908 start.go:83] releasing machines lock for "offline-docker-868000", held for 2.393000292s
	W0906 12:22:36.746184    5908 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:36.761432    5908 out.go:177] * Deleting "offline-docker-868000" in qemu2 ...
	W0906 12:22:36.775466    5908 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:36.775477    5908 start.go:729] Will try again in 5 seconds ...
	I0906 12:22:41.777727    5908 start.go:360] acquireMachinesLock for offline-docker-868000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:41.778217    5908 start.go:364] duration metric: took 358.75µs to acquireMachinesLock for "offline-docker-868000"
	I0906 12:22:41.778651    5908 start.go:93] Provisioning new machine with config: &{Name:offline-docker-868000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:41.778651    5908 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:41.797315    5908 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:22:41.848646    5908 start.go:159] libmachine.API.Create for "offline-docker-868000" (driver="qemu2")
	I0906 12:22:41.848705    5908 client.go:168] LocalClient.Create starting
	I0906 12:22:41.848821    5908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:22:41.848893    5908 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:41.848913    5908 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:41.848976    5908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:22:41.849020    5908 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:41.849032    5908 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:41.849599    5908 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:22:42.016782    5908 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:42.054349    5908 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:42.054359    5908 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:42.054553    5908 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2
	I0906 12:22:42.063647    5908 main.go:141] libmachine: STDOUT: 
	I0906 12:22:42.063666    5908 main.go:141] libmachine: STDERR: 
	I0906 12:22:42.063705    5908 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2 +20000M
	I0906 12:22:42.071404    5908 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:42.071426    5908 main.go:141] libmachine: STDERR: 
	I0906 12:22:42.071437    5908 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2
	I0906 12:22:42.071441    5908 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:42.071450    5908 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:42.071492    5908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:4c:39:79:2e:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/offline-docker-868000/disk.qcow2
	I0906 12:22:42.073015    5908 main.go:141] libmachine: STDOUT: 
	I0906 12:22:42.073032    5908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:42.073043    5908 client.go:171] duration metric: took 224.33575ms to LocalClient.Create
	I0906 12:22:44.075253    5908 start.go:128] duration metric: took 2.296573125s to createHost
	I0906 12:22:44.075498    5908 start.go:83] releasing machines lock for "offline-docker-868000", held for 2.297088875s
	W0906 12:22:44.075884    5908 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-868000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-868000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:44.087605    5908 out.go:201] 
	W0906 12:22:44.095675    5908 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:44.095701    5908 out.go:270] * 
	* 
	W0906 12:22:44.098146    5908 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:22:44.107526    5908 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-868000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-09-06 12:22:44.12208 -0700 PDT m=+3248.740128251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-868000 -n offline-docker-868000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-868000 -n offline-docker-868000: exit status 7 (66.993583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-868000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-868000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-868000
--- FAIL: TestOffline (10.05s)
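Note: both VM creation attempts die at the same step; socket_vmnet_client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet, so qemu-system-aarch64 never starts, and every qemu2-driver test on this agent fails the same way. A small Go sketch (a hypothetical diagnostic, not part of the suite) that separates "socket file missing" from "daemon not accepting connections":

	// socket_vmnet_check.go: classify the /var/run/socket_vmnet failure mode.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing qemu invocation above
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file missing (daemon never started):", err)
			return
		}
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Println("daemon not accepting connections:", err) // matches "Connection refused" in the log
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}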

TestAddons/parallel/Registry (71.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.659542ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-68fq8" [cbfa4ae6-52b4-4753-8931-dc75977f2b98] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006189208s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-77sc6" [6010aca8-2072-44fb-abeb-395ddabbb03a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0103045s
addons_test.go:342: (dbg) Run:  kubectl --context addons-439000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-439000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-439000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.053543916s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-439000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
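Note: the probe itself is simple; the busybox pod runs wget --spider against the registry service's cluster DNS name and expects HTTP/1.1 200, but the request timed out after one minute. A Go sketch of the equivalent check (illustrative only; it resolves only when run inside the cluster, where kube-system service DNS is available):

	// registry_probe.go: mirror the wget --spider check from addons_test.go:347.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry unreachable:", err) // the test hit a timeout here
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status) // the test asserts HTTP/1.1 200
	}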
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ip
2024/09/06 11:42:13 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-439000 -n addons-439000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-666000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |                     |
	|         | -p download-only-666000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT | 06 Sep 24 11:28 PDT |
	| delete  | -p download-only-666000              | download-only-666000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT | 06 Sep 24 11:28 PDT |
	| start   | -o=json --download-only              | download-only-782000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |                     |
	|         | -p download-only-782000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-782000              | download-only-782000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-666000              | download-only-666000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-782000              | download-only-782000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| start   | --download-only -p                   | binary-mirror-065000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | binary-mirror-065000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-065000              | binary-mirror-065000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| addons  | disable dashboard -p                 | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | addons-439000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | addons-439000                        |                      |         |         |                     |                     |
	| start   | -p addons-439000 --wait=true         | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:32 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:32 PDT | 06 Sep 24 11:33 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-439000 addons                 | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:41 PDT | 06 Sep 24 11:41 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-439000 addons                 | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:41 PDT | 06 Sep 24 11:41 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-439000 addons                 | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:41 PDT | 06 Sep 24 11:41 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:41 PDT | 06 Sep 24 11:42 PDT |
	|         | addons-439000                        |                      |         |         |                     |                     |
	| ssh     | addons-439000 ssh curl -s            | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:42 PDT | 06 Sep 24 11:42 PDT |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-439000 ip                     | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:42 PDT | 06 Sep 24 11:42 PDT |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:42 PDT | 06 Sep 24 11:42 PDT |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:42 PDT |                     |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| ip      | addons-439000 ip                     | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:42 PDT | 06 Sep 24 11:42 PDT |
	| addons  | addons-439000 addons disable         | addons-439000        | jenkins | v1.34.0 | 06 Sep 24 11:42 PDT | 06 Sep 24 11:42 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:29:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
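
The header above documents the klog-style line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) used for everything that follows. For readers post-processing these logs, here is a minimal Go sketch of a parser for that format; the regexp and field names are illustrative, not part of minikube:

package main

import (
	"fmt"
	"regexp"
)

// Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I0906 11:29:03.138463    2772 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
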
	I0906 11:29:03.138463    2772 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:29:03.138724    2772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:03.138727    2772 out.go:358] Setting ErrFile to fd 2...
	I0906 11:29:03.138729    2772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:03.138882    2772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 11:29:03.140138    2772 out.go:352] Setting JSON to false
	I0906 11:29:03.156467    2772 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1713,"bootTime":1725645630,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 11:29:03.156528    2772 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:29:03.160124    2772 out.go:177] * [addons-439000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 11:29:03.166998    2772 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:29:03.167046    2772 notify.go:220] Checking for updates...
	I0906 11:29:03.174072    2772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:29:03.177035    2772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 11:29:03.180061    2772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:29:03.183085    2772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 11:29:03.186027    2772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:29:03.189200    2772 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:29:03.195916    2772 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 11:29:03.199041    2772 start.go:297] selected driver: qemu2
	I0906 11:29:03.199050    2772 start.go:901] validating driver "qemu2" against <nil>
	I0906 11:29:03.199058    2772 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:29:03.201222    2772 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:29:03.204056    2772 out.go:177] * Automatically selected the socket_vmnet network
	I0906 11:29:03.205397    2772 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:29:03.205452    2772 cni.go:84] Creating CNI manager for ""
	I0906 11:29:03.205460    2772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:03.205472    2772 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 11:29:03.205503    2772 start.go:340] cluster config:
	{Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:29:03.208988    2772 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:29:03.217018    2772 out.go:177] * Starting "addons-439000" primary control-plane node in "addons-439000" cluster
	I0906 11:29:03.220974    2772 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:03.220993    2772 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 11:29:03.221002    2772 cache.go:56] Caching tarball of preloaded images
	I0906 11:29:03.221080    2772 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 11:29:03.221085    2772 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 11:29:03.221295    2772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/config.json ...
	I0906 11:29:03.221306    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/config.json: {Name:mka04d0959b9e0d6989b9dfb8ff7248f9ae1fcce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
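
The cluster config dumped above is persisted as JSON at .minikube/profiles/addons-439000/config.json. Below is a minimal sketch for reading a few fields back out of that file; the struct covers only a subset, and its field names are inferred from the config dump in this log rather than from a documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Subset of the profile config; field names inferred from the dump above.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		ClusterName       string
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: readcfg <path-to-config.json>")
		return
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n", cfg.Name, cfg.Driver,
		cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
}
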
	I0906 11:29:03.221670    2772 start.go:360] acquireMachinesLock for addons-439000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 11:29:03.221737    2772 start.go:364] duration metric: took 61.041µs to acquireMachinesLock for "addons-439000"
	I0906 11:29:03.221749    2772 start.go:93] Provisioning new machine with config: &{Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 11:29:03.221791    2772 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 11:29:03.230008    2772 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 11:29:03.451154    2772 start.go:159] libmachine.API.Create for "addons-439000" (driver="qemu2")
	I0906 11:29:03.451195    2772 client.go:168] LocalClient.Create starting
	I0906 11:29:03.451389    2772 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 11:29:03.565223    2772 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 11:29:03.630899    2772 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 11:29:04.388995    2772 main.go:141] libmachine: Creating SSH key...
	I0906 11:29:04.490533    2772 main.go:141] libmachine: Creating Disk image...
	I0906 11:29:04.490539    2772 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 11:29:04.491498    2772 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/disk.qcow2
	I0906 11:29:04.507448    2772 main.go:141] libmachine: STDOUT: 
	I0906 11:29:04.507472    2772 main.go:141] libmachine: STDERR: 
	I0906 11:29:04.507520    2772 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/disk.qcow2 +20000M
	I0906 11:29:04.515469    2772 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 11:29:04.515483    2772 main.go:141] libmachine: STDERR: 
	I0906 11:29:04.515505    2772 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/disk.qcow2
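
The two qemu-img calls above are the disk-creation pattern: convert the raw scratch image to qcow2, then grow the qcow2 by +20000M so the guest sees a 20 GB disk while the host file stays sparse. Here is a minimal Go sketch of the same two-step sequence via os/exec, with placeholder paths:

package main

import (
	"fmt"
	"os/exec"
)

// createDisk mirrors the sequence in the log: raw -> qcow2, then resize
// in place. qcow2 allocates blocks lazily, so the image stays sparse.
func createDisk(raw, qcow2 string, extraMB int) error {
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
		{"qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		panic(err)
	}
}
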
	I0906 11:29:04.515514    2772 main.go:141] libmachine: Starting QEMU VM...
	I0906 11:29:04.515554    2772 qemu.go:418] Using hvf for hardware acceleration
	I0906 11:29:04.515587    2772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:66:c6:06:b7:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/disk.qcow2
	I0906 11:29:04.569971    2772 main.go:141] libmachine: STDOUT: 
	I0906 11:29:04.570003    2772 main.go:141] libmachine: STDERR: 
	I0906 11:29:04.570007    2772 main.go:141] libmachine: Attempt 0
	I0906 11:29:04.570038    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:04.570105    2772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 11:29:04.570125    2772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66dc94ca}
	I0906 11:29:06.572259    2772 main.go:141] libmachine: Attempt 1
	I0906 11:29:06.572364    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:06.572709    2772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 11:29:06.572761    2772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66dc94ca}
	I0906 11:29:08.574984    2772 main.go:141] libmachine: Attempt 2
	I0906 11:29:08.575152    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:08.575481    2772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 11:29:08.575555    2772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66dc94ca}
	I0906 11:29:10.577697    2772 main.go:141] libmachine: Attempt 3
	I0906 11:29:10.577737    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:10.577789    2772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 11:29:10.577815    2772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66dc94ca}
	I0906 11:29:12.579828    2772 main.go:141] libmachine: Attempt 4
	I0906 11:29:12.579838    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:12.579875    2772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 11:29:12.579885    2772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66dc94ca}
	I0906 11:29:14.581917    2772 main.go:141] libmachine: Attempt 5
	I0906 11:29:14.581934    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:14.581980    2772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 11:29:14.581989    2772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66dc94ca}
	I0906 11:29:16.584049    2772 main.go:141] libmachine: Attempt 6
	I0906 11:29:16.584077    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:16.584140    2772 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 11:29:16.584149    2772 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66dc94ca}
	I0906 11:29:18.586221    2772 main.go:141] libmachine: Attempt 7
	I0906 11:29:18.586245    2772 main.go:141] libmachine: Searching for f2:66:c6:6:b7:3b in /var/db/dhcpd_leases ...
	I0906 11:29:18.586331    2772 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0906 11:29:18.586345    2772 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:66:c6:6:b7:3b ID:1,f2:66:c6:6:b7:3b Lease:0x66dc9b7d}
	I0906 11:29:18.586347    2772 main.go:141] libmachine: Found match: f2:66:c6:6:b7:3b
	I0906 11:29:18.586352    2772 main.go:141] libmachine: IP: 192.168.105.2
	I0906 11:29:18.586362    2772 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
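
Note the IP-discovery loop above: the VM was started with mac=f2:66:c6:06:b7:3b, but /var/db/dhcpd_leases is polled every ~2s for f2:66:c6:6:b7:3b, because macOS writes lease hardware addresses without zero-padded octets; the match lands on attempt 7 once the lease appears. Here is a minimal Go sketch of that scan, assuming the usual {...} block layout of dhcpd_leases with ip_address=/hw_address= lines (which the "dhcp entry" lines above mirror):

package main

import (
	"fmt"
	"os"
	"strings"
)

// findLeaseIP returns the ip_address of the lease block whose hw_address
// ends in mac. Pass the MAC with unpadded octets (f2:66:c6:6:b7:3b).
func findLeaseIP(path, mac string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	for _, block := range strings.Split(string(data), "}") {
		var ip string
		matched := false
		for _, line := range strings.Split(block, "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				// value looks like "1,f2:66:c6:6:b7:3b"
				matched = strings.HasSuffix(v, ","+mac)
			}
		}
		if matched && ip != "" {
			return ip, nil
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "f2:66:c6:6:b7:3b")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.105.2 in the run above
}
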
	I0906 11:29:20.604883    2772 machine.go:93] provisionDockerMachine start ...
	I0906 11:29:20.606399    2772 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:20.606904    2772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028185a0] 0x10281ae00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 11:29:20.606921    2772 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 11:29:20.682994    2772 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 11:29:20.683022    2772 buildroot.go:166] provisioning hostname "addons-439000"
	I0906 11:29:20.683112    2772 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:20.683428    2772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028185a0] 0x10281ae00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 11:29:20.683439    2772 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-439000 && echo "addons-439000" | sudo tee /etc/hostname
	I0906 11:29:20.752032    2772 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-439000
	
	I0906 11:29:20.752123    2772 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:20.752305    2772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028185a0] 0x10281ae00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 11:29:20.752315    2772 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-439000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-439000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-439000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 11:29:20.808772    2772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 11:29:20.808788    2772 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-2143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-2143/.minikube}
	I0906 11:29:20.808800    2772 buildroot.go:174] setting up certificates
	I0906 11:29:20.808805    2772 provision.go:84] configureAuth start
	I0906 11:29:20.808810    2772 provision.go:143] copyHostCerts
	I0906 11:29:20.808899    2772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem (1082 bytes)
	I0906 11:29:20.809153    2772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem (1123 bytes)
	I0906 11:29:20.809265    2772 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem (1675 bytes)
	I0906 11:29:20.809361    2772 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem org=jenkins.addons-439000 san=[127.0.0.1 192.168.105.2 addons-439000 localhost minikube]
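
The server certificate above is issued with a SAN set covering every name the Docker endpoint may be reached by: loopback, the VM IP, the hostname, and the generic localhost/minikube names. A minimal self-signed sketch showing how IP and DNS SANs sit side by side in a crypto/x509 template follows; minikube signs with its CA key rather than self-signing, so this is illustrative only:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-439000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN set from the log: IPs and DNS names side by side.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.105.2")},
		DNSNames:    []string{"addons-439000", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
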
	I0906 11:29:21.019453    2772 provision.go:177] copyRemoteCerts
	I0906 11:29:21.019515    2772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 11:29:21.019523    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:21.047991    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 11:29:21.056971    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 11:29:21.065621    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 11:29:21.073824    2772 provision.go:87] duration metric: took 265.010292ms to configureAuth
	I0906 11:29:21.073834    2772 buildroot.go:189] setting minikube options for container-runtime
	I0906 11:29:21.073951    2772 config.go:182] Loaded profile config "addons-439000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:21.073987    2772 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:21.074074    2772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028185a0] 0x10281ae00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 11:29:21.074079    2772 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 11:29:21.124634    2772 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 11:29:21.124641    2772 buildroot.go:70] root file system type: tmpfs
	I0906 11:29:21.124699    2772 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 11:29:21.124749    2772 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:21.124852    2772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028185a0] 0x10281ae00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 11:29:21.124884    2772 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 11:29:21.181085    2772 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 11:29:21.181135    2772 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:21.181244    2772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028185a0] 0x10281ae00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 11:29:21.181254    2772 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 11:29:22.545267    2772 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 11:29:22.545281    2772 machine.go:96] duration metric: took 1.94039775s to provisionDockerMachine
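
The diff || { mv; daemon-reload; enable; restart; } one-liner a few lines up makes the unit install idempotent: the rendered unit goes to docker.service.new, and only when it differs from the installed file is it moved into place and the daemon restarted (here diff fails because no unit exists yet, so the first install takes the same path). Here is a minimal local sketch of the write-compare-swap step, with the systemctl side left to the caller:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes content to path only when it differs from the
// current file and reports whether a reload/restart is needed, mirroring
// the diff-then-mv idiom in the log.
func installIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical: leave the service alone
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := installIfChanged("docker.service", []byte("[Unit]\n"))
	if err != nil {
		panic(err)
	}
	if changed {
		fmt.Println("unit updated: daemon-reload and restart required")
	}
}
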
	I0906 11:29:22.545286    2772 client.go:171] duration metric: took 19.094387625s to LocalClient.Create
	I0906 11:29:22.545303    2772 start.go:167] duration metric: took 19.094453125s to libmachine.API.Create "addons-439000"
	I0906 11:29:22.545307    2772 start.go:293] postStartSetup for "addons-439000" (driver="qemu2")
	I0906 11:29:22.545313    2772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 11:29:22.545384    2772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 11:29:22.545395    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:22.574476    2772 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 11:29:22.576303    2772 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 11:29:22.576310    2772 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/addons for local assets ...
	I0906 11:29:22.576410    2772 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/files for local assets ...
	I0906 11:29:22.576442    2772 start.go:296] duration metric: took 31.132208ms for postStartSetup
	I0906 11:29:22.576845    2772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/config.json ...
	I0906 11:29:22.577034    2772 start.go:128] duration metric: took 19.355544667s to createHost
	I0906 11:29:22.577057    2772 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:22.577149    2772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028185a0] 0x10281ae00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 11:29:22.577154    2772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 11:29:22.629033    2772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647363.107008753
	
	I0906 11:29:22.629043    2772 fix.go:216] guest clock: 1725647363.107008753
	I0906 11:29:22.629047    2772 fix.go:229] Guest: 2024-09-06 11:29:23.107008753 -0700 PDT Remote: 2024-09-06 11:29:22.577037 -0700 PDT m=+19.457878543 (delta=529.971753ms)
	I0906 11:29:22.629059    2772 fix.go:200] guest clock delta is within tolerance: 529.971753ms
	I0906 11:29:22.629062    2772 start.go:83] releasing machines lock for "addons-439000", held for 19.407626083s
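
The clock check above runs date +%s.%N in the guest and compares it with the host clock at the moment the command returns; the 529.971753ms delta is inside minikube's tolerance, so no resync happens. A small sketch reproducing the arithmetic with the two timestamps from this run (the host epoch below is derived from the logged Remote time; the tolerance is a placeholder, not minikube's actual threshold in fix.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts `date +%s.%N` output ("1725647363.107008753")
// into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so ".1" means 100ms, not 1ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseEpoch("1725647363.107008753") // guest clock from the log
	host, _ := parseEpoch("1725647362.577037")     // host clock (2024-09-06 11:29:22.577037 -0700)
	delta := guest.Sub(host)
	fmt.Println(delta) // 529.971753ms, matching the logged delta
	if delta.Abs() < 2*time.Second { // placeholder tolerance
		fmt.Println("within tolerance: skip resync")
	}
}
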
	I0906 11:29:22.629350    2772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 11:29:22.629371    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:22.629351    2772 ssh_runner.go:195] Run: cat /version.json
	I0906 11:29:22.629381    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:22.703020    2772 ssh_runner.go:195] Run: systemctl --version
	I0906 11:29:22.705575    2772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 11:29:22.707713    2772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 11:29:22.707742    2772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 11:29:22.714265    2772 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
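
Before installing its own CNI config, minikube sidelines anything that could conflict: the find -exec step above renames bridge/podman configs under /etc/cni/net.d with a .mk_disabled suffix (here it catches 87-podman-bridge.conflist). A Go sketch of the same rename-to-disable pass; the name filters mirror the find predicates:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames *bridge*/*podman* files in dir by adding
// a .mk_disabled suffix, mirroring the find -exec mv step in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Printf("disabled %d config(s): %v\n", len(disabled), disabled)
}
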
	I0906 11:29:22.714272    2772 start.go:495] detecting cgroup driver to use...
	I0906 11:29:22.714370    2772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:29:22.721067    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 11:29:22.724502    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 11:29:22.728114    2772 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 11:29:22.728139    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 11:29:22.731669    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:29:22.735579    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 11:29:22.739433    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:29:22.743350    2772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 11:29:22.747344    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 11:29:22.751160    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 11:29:22.755103    2772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 11:29:22.759223    2772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 11:29:22.762929    2772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 11:29:22.766677    2772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:22.834529    2772 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 11:29:22.845073    2772 start.go:495] detecting cgroup driver to use...
	I0906 11:29:22.845145    2772 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 11:29:22.850539    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:29:22.855933    2772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 11:29:22.863783    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:29:22.869476    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 11:29:22.874633    2772 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 11:29:22.914720    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 11:29:22.920860    2772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:29:22.927204    2772 ssh_runner.go:195] Run: which cri-dockerd
	I0906 11:29:22.928585    2772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 11:29:22.931761    2772 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 11:29:22.937532    2772 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 11:29:23.002128    2772 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 11:29:23.068304    2772 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 11:29:23.068376    2772 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 11:29:23.074560    2772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:23.142463    2772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 11:29:25.330279    2772 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.187830458s)
	I0906 11:29:25.330355    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 11:29:25.336053    2772 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 11:29:25.343176    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:29:25.348701    2772 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 11:29:25.422870    2772 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 11:29:25.490895    2772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:25.564440    2772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 11:29:25.571241    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:29:25.576814    2772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:25.650269    2772 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 11:29:25.675487    2772 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 11:29:25.675562    2772 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 11:29:25.677744    2772 start.go:563] Will wait 60s for crictl version
	I0906 11:29:25.677791    2772 ssh_runner.go:195] Run: which crictl
	I0906 11:29:25.679382    2772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 11:29:25.697947    2772 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 11:29:25.698009    2772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:29:25.709561    2772 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:29:25.727066    2772 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 11:29:25.727205    2772 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 11:29:25.728662    2772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
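
The bash pipeline above is an idempotent /etc/hosts update: drop any existing line ending in <tab>host.minikube.internal, append the fresh mapping, and copy the temp file back via sudo (the same pattern repeats later for control-plane.minikube.internal). A minimal Go version of the rewrite, editing the file directly instead of going through sudo cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry rewrites path so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping (the grep -v step)
		}
		kept = append(kept, line)
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1] // trim trailing blank lines
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name), "") // the echo step
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.105.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
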
	I0906 11:29:25.732698    2772 kubeadm.go:883] updating cluster {Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 11:29:25.732744    2772 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:25.732786    2772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:29:25.744983    2772 docker.go:685] Got preloaded images: 
	I0906 11:29:25.744991    2772 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0906 11:29:25.745030    2772 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 11:29:25.748320    2772 ssh_runner.go:195] Run: which lz4
	I0906 11:29:25.749703    2772 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 11:29:25.750999    2772 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 11:29:25.751009    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322549298 bytes)
	I0906 11:29:27.000149    2772 docker.go:649] duration metric: took 1.250493708s to copy over tarball
	I0906 11:29:27.000206    2772 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 11:29:27.965798    2772 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 11:29:27.980924    2772 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 11:29:27.984437    2772 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0906 11:29:27.990485    2772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:28.071468    2772 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 11:29:30.737761    2772 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.666318875s)
	I0906 11:29:30.737850    2772 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:29:30.743593    2772 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 11:29:30.743605    2772 cache_images.go:84] Images are preloaded, skipping loading
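
The preload flow in this stretch of the log ships Docker's image store as an lz4 tarball, unpacks it over /var, restarts Docker, and re-lists images; whether loading can be skipped is keyed off a single pivot image (kube-apiserver for the target version, per the "wasn't preloaded" check at docker.go:691 earlier). A sketch of that check, shelling out to the same docker images listing ("pivot image" is descriptive shorthand, not minikube terminology):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePreloaded runs the same listing as the log and looks for one
// pivot image to decide whether the preload took effect.
func imagePreloaded(image string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == image {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePreloaded("registry.k8s.io/kube-apiserver:v1.31.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
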
	I0906 11:29:30.743610    2772 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.0 docker true true} ...
	I0906 11:29:30.743695    2772 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-439000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 11:29:30.743749    2772 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 11:29:30.765422    2772 cni.go:84] Creating CNI manager for ""
	I0906 11:29:30.765436    2772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:30.765452    2772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 11:29:30.765463    2772 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-439000 NodeName:addons-439000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 11:29:30.765539    2772 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-439000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 11:29:30.765602    2772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 11:29:30.769447    2772 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 11:29:30.769485    2772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 11:29:30.772740    2772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0906 11:29:30.778672    2772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 11:29:30.784649    2772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0906 11:29:30.790858    2772 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0906 11:29:30.792282    2772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 11:29:30.796012    2772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:30.857976    2772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:29:30.864774    2772 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000 for IP: 192.168.105.2
	I0906 11:29:30.864787    2772 certs.go:194] generating shared ca certs ...
	I0906 11:29:30.864796    2772 certs.go:226] acquiring lock for ca certs: {Name:mkeb2acf337d35e5b807329b963b0c0723ad2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:30.864973    2772 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key
	I0906 11:29:31.035304    2772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt ...
	I0906 11:29:31.035317    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt: {Name:mk76350c329c7bbe0b41f1b72c4754d965f96018 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.035648    2772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key ...
	I0906 11:29:31.035652    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key: {Name:mk9d01bdd598787349ea5c570a188a9c9039cccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.035786    2772 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key
	I0906 11:29:31.092583    2772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.crt ...
	I0906 11:29:31.092587    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.crt: {Name:mk9e3205858b2b873a67f3a36b684c5722a34d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.092752    2772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key ...
	I0906 11:29:31.092755    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key: {Name:mk957b57e3ae01795bd1919fb95312401e5464fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.092888    2772 certs.go:256] generating profile certs ...
	I0906 11:29:31.092925    2772 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.key
	I0906 11:29:31.092943    2772 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt with IP's: []
	I0906 11:29:31.204389    2772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt ...
	I0906 11:29:31.204393    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: {Name:mk8817fca02f5bb8044f93f17df81efd15aa19ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.204540    2772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.key ...
	I0906 11:29:31.204544    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.key: {Name:mk3baa79fd8e74a95a4342be8ada477acbca393f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.204660    2772 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.key.ce722006
	I0906 11:29:31.204676    2772 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.crt.ce722006 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0906 11:29:31.320453    2772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.crt.ce722006 ...
	I0906 11:29:31.320462    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.crt.ce722006: {Name:mke9d9da790f6c7736034751a29f49ff78679aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.320700    2772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.key.ce722006 ...
	I0906 11:29:31.320705    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.key.ce722006: {Name:mk7c2270c4406d033c88ad511515267018d4425c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.320835    2772 certs.go:381] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.crt.ce722006 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.crt
	I0906 11:29:31.321031    2772 certs.go:385] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.key.ce722006 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.key
	I0906 11:29:31.321161    2772 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.key
	I0906 11:29:31.321173    2772 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.crt with IP's: []
	I0906 11:29:31.523312    2772 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.crt ...
	I0906 11:29:31.523323    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.crt: {Name:mk29519bdd375fa89cfe9343b670b304738ba4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.523582    2772 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.key ...
	I0906 11:29:31.523595    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.key: {Name:mk59f0d9afd1063ef978f645617fdb880a217f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:31.523901    2772 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 11:29:31.523937    2772 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem (1082 bytes)
	I0906 11:29:31.523975    2772 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem (1123 bytes)
	I0906 11:29:31.524007    2772 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem (1675 bytes)
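	The profile certs generated above all chain to the minikubeCA created at the start of this block; they can be spot-checked with openssl, assuming the same paths as in this log:

	    cd /Users/jenkins/minikube-integration/19576-2143/.minikube
	    openssl verify -CAfile ca.crt profiles/addons-439000/apiserver.crt
	    # the SANs should list the IPs passed above: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2
	    openssl x509 -noout -ext subjectAltName -in profiles/addons-439000/apiserver.crt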
	I0906 11:29:31.524495    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 11:29:31.535243    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 11:29:31.544005    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 11:29:31.554449    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 11:29:31.562950    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 11:29:31.571459    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 11:29:31.579985    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 11:29:31.588066    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 11:29:31.596154    2772 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 11:29:31.604366    2772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 11:29:31.611171    2772 ssh_runner.go:195] Run: openssl version
	I0906 11:29:31.613679    2772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 11:29:31.617456    2772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:31.619038    2772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:31.619059    2772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:31.621155    2772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
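	The b5213941.0 link name above is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by the certificate's subject-name hash, so the file must be named <hash>.0. The link this run just created can be rebuilt generically:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 here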
	I0906 11:29:31.624754    2772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 11:29:31.626228    2772 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 11:29:31.626267    2772 kubeadm.go:392] StartCluster: {Name:addons-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:29:31.626334    2772 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 11:29:31.631606    2772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 11:29:31.635540    2772 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 11:29:31.639216    2772 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 11:29:31.642944    2772 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 11:29:31.642952    2772 kubeadm.go:157] found existing configuration files:
	
	I0906 11:29:31.642977    2772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 11:29:31.646535    2772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 11:29:31.646558    2772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 11:29:31.650280    2772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 11:29:31.653701    2772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 11:29:31.653726    2772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 11:29:31.656984    2772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 11:29:31.660057    2772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 11:29:31.660076    2772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 11:29:31.663427    2772 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 11:29:31.666975    2772 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 11:29:31.666998    2772 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 11:29:31.670644    2772 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
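	Because the command above suppresses a list of preflight checks via --ignore-preflight-errors, a failed start can be hard to attribute; the preflight stage can be replayed on its own with the same config and binaries directory, roughly:

	    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml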
	I0906 11:29:31.691714    2772 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 11:29:31.691753    2772 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 11:29:31.730716    2772 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 11:29:31.730774    2772 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 11:29:31.730836    2772 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 11:29:31.735025    2772 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 11:29:31.749225    2772 out.go:235]   - Generating certificates and keys ...
	I0906 11:29:31.749260    2772 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 11:29:31.749297    2772 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 11:29:31.889090    2772 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 11:29:31.922940    2772 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 11:29:31.963343    2772 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 11:29:32.053045    2772 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 11:29:32.152384    2772 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 11:29:32.152449    2772 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-439000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0906 11:29:32.221357    2772 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 11:29:32.221420    2772 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-439000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0906 11:29:32.344322    2772 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 11:29:32.561233    2772 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 11:29:32.683655    2772 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 11:29:32.683693    2772 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 11:29:32.728835    2772 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 11:29:32.765788    2772 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 11:29:32.814594    2772 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 11:29:32.899998    2772 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 11:29:33.053652    2772 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 11:29:33.053888    2772 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 11:29:33.055926    2772 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 11:29:33.060220    2772 out.go:235]   - Booting up control plane ...
	I0906 11:29:33.060273    2772 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 11:29:33.060312    2772 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 11:29:33.060346    2772 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 11:29:33.066511    2772 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 11:29:33.069019    2772 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 11:29:33.069087    2772 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 11:29:33.140489    2772 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 11:29:33.140550    2772 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 11:29:33.652031    2772 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.538959ms
	I0906 11:29:33.652265    2772 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 11:29:36.656166    2772 kubeadm.go:310] [api-check] The API server is healthy after 3.001874543s
	I0906 11:29:36.672902    2772 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 11:29:36.682506    2772 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 11:29:36.706097    2772 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 11:29:36.706235    2772 kubeadm.go:310] [mark-control-plane] Marking the node addons-439000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 11:29:36.709787    2772 kubeadm.go:310] [bootstrap-token] Using token: s2nuze.il1wzwwxq395d53q
	I0906 11:29:36.716809    2772 out.go:235]   - Configuring RBAC rules ...
	I0906 11:29:36.716890    2772 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 11:29:36.722009    2772 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 11:29:36.724537    2772 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 11:29:36.725427    2772 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 11:29:36.726952    2772 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 11:29:36.727978    2772 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 11:29:37.062174    2772 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 11:29:37.465475    2772 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 11:29:38.061634    2772 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 11:29:38.062655    2772 kubeadm.go:310] 
	I0906 11:29:38.062735    2772 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 11:29:38.062744    2772 kubeadm.go:310] 
	I0906 11:29:38.062866    2772 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 11:29:38.062885    2772 kubeadm.go:310] 
	I0906 11:29:38.062916    2772 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 11:29:38.062982    2772 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 11:29:38.063080    2772 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 11:29:38.063087    2772 kubeadm.go:310] 
	I0906 11:29:38.063179    2772 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 11:29:38.063188    2772 kubeadm.go:310] 
	I0906 11:29:38.063245    2772 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 11:29:38.063251    2772 kubeadm.go:310] 
	I0906 11:29:38.063319    2772 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 11:29:38.063466    2772 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 11:29:38.063559    2772 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 11:29:38.063577    2772 kubeadm.go:310] 
	I0906 11:29:38.063716    2772 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 11:29:38.063824    2772 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 11:29:38.063847    2772 kubeadm.go:310] 
	I0906 11:29:38.064030    2772 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s2nuze.il1wzwwxq395d53q \
	I0906 11:29:38.064175    2772 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 \
	I0906 11:29:38.064204    2772 kubeadm.go:310] 	--control-plane 
	I0906 11:29:38.064212    2772 kubeadm.go:310] 
	I0906 11:29:38.064312    2772 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 11:29:38.064330    2772 kubeadm.go:310] 
	I0906 11:29:38.064425    2772 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s2nuze.il1wzwwxq395d53q \
	I0906 11:29:38.064565    2772 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 
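	The bootstrap token embedded in the join commands above is short-lived (kubeadm's default TTL is 24h), so joining a node later needs a fresh one. Standard kubeadm, run on the control plane:

	    kubeadm token create --print-join-command   # prints a ready-to-run worker join line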
	I0906 11:29:38.064962    2772 kubeadm.go:310] W0906 18:29:32.168296    1584 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 11:29:38.065354    2772 kubeadm.go:310] W0906 18:29:32.169049    1584 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 11:29:38.065511    2772 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 11:29:38.065526    2772 cni.go:84] Creating CNI manager for ""
	I0906 11:29:38.065544    2772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:38.071075    2772 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 11:29:38.075047    2772 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 11:29:38.084062    2772 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 11:29:38.096973    2772 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 11:29:38.097083    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:38.097127    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-439000 minikube.k8s.io/updated_at=2024_09_06T11_29_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-439000 minikube.k8s.io/primary=true
	I0906 11:29:38.178182    2772 ops.go:34] apiserver oom_adj: -16
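	The -16 read back above comes from the legacy /proc/<pid>/oom_adj interface queried earlier in this block; a strongly negative value tells the kernel's OOM killer to prefer almost any other victim over the API server. Both interfaces can be inspected (note the different scales):

	    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale, -17..15
	    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current scale, -1000..1000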
	I0906 11:29:38.178232    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:38.680425    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:39.180374    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:39.679619    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:40.180293    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:40.680279    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:41.180368    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:41.680389    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:42.180324    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:42.680240    2772 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:42.724179    2772 kubeadm.go:1113] duration metric: took 4.627272708s to wait for elevateKubeSystemPrivileges
	I0906 11:29:42.724193    2772 kubeadm.go:394] duration metric: took 11.098102083s to StartCluster
	I0906 11:29:42.724203    2772 settings.go:142] acquiring lock: {Name:mk12afd771d0c660db2e89d96a6968c1a28fb2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:42.724381    2772 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:29:42.724565    2772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:42.724794    2772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 11:29:42.724830    2772 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 11:29:42.724867    2772 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 11:29:42.724912    2772 addons.go:69] Setting yakd=true in profile "addons-439000"
	I0906 11:29:42.724921    2772 addons.go:234] Setting addon yakd=true in "addons-439000"
	I0906 11:29:42.724937    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.724939    2772 config.go:182] Loaded profile config "addons-439000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:42.724940    2772 addons.go:69] Setting inspektor-gadget=true in profile "addons-439000"
	I0906 11:29:42.724951    2772 addons.go:234] Setting addon inspektor-gadget=true in "addons-439000"
	I0906 11:29:42.724963    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.724971    2772 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-439000"
	I0906 11:29:42.724979    2772 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-439000"
	I0906 11:29:42.724988    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.724994    2772 addons.go:69] Setting storage-provisioner=true in profile "addons-439000"
	I0906 11:29:42.725018    2772 addons.go:69] Setting volumesnapshots=true in profile "addons-439000"
	I0906 11:29:42.725026    2772 addons.go:69] Setting metrics-server=true in profile "addons-439000"
	I0906 11:29:42.725029    2772 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-439000"
	I0906 11:29:42.725031    2772 addons.go:234] Setting addon volumesnapshots=true in "addons-439000"
	I0906 11:29:42.725029    2772 addons.go:234] Setting addon storage-provisioner=true in "addons-439000"
	I0906 11:29:42.725035    2772 addons.go:69] Setting gcp-auth=true in profile "addons-439000"
	I0906 11:29:42.725015    2772 addons.go:69] Setting volcano=true in profile "addons-439000"
	I0906 11:29:42.725043    2772 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-439000"
	I0906 11:29:42.725049    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725051    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725058    2772 mustload.go:65] Loading cluster: addons-439000
	I0906 11:29:42.725058    2772 addons.go:234] Setting addon volcano=true in "addons-439000"
	I0906 11:29:42.725078    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725094    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725114    2772 config.go:182] Loaded profile config "addons-439000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:42.725193    2772 addons.go:69] Setting default-storageclass=true in profile "addons-439000"
	I0906 11:29:42.725213    2772 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-439000"
	I0906 11:29:42.725349    2772 addons.go:69] Setting ingress=true in profile "addons-439000"
	I0906 11:29:42.725352    2772 retry.go:31] will retry after 844.357477ms: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725357    2772 addons.go:234] Setting addon ingress=true in "addons-439000"
	I0906 11:29:42.725360    2772 retry.go:31] will retry after 1.404254304s: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725358    2772 retry.go:31] will retry after 1.467404153s: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725368    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725364    2772 addons.go:69] Setting ingress-dns=true in profile "addons-439000"
	I0906 11:29:42.725392    2772 addons.go:234] Setting addon ingress-dns=true in "addons-439000"
	I0906 11:29:42.725400    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725474    2772 retry.go:31] will retry after 1.132094872s: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725502    2772 addons.go:69] Setting registry=true in profile "addons-439000"
	I0906 11:29:42.725526    2772 addons.go:234] Setting addon registry=true in "addons-439000"
	I0906 11:29:42.725555    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725641    2772 retry.go:31] will retry after 668.586579ms: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725643    2772 retry.go:31] will retry after 878.68708ms: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725641    2772 retry.go:31] will retry after 634.266146ms: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725645    2772 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-439000"
	I0906 11:29:42.725654    2772 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-439000"
	I0906 11:29:42.725027    2772 addons.go:69] Setting cloud-spanner=true in profile "addons-439000"
	I0906 11:29:42.725641    2772 retry.go:31] will retry after 977.851049ms: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725726    2772 addons.go:234] Setting addon cloud-spanner=true in "addons-439000"
	I0906 11:29:42.725752    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725033    2772 addons.go:234] Setting addon metrics-server=true in "addons-439000"
	I0906 11:29:42.725781    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.725789    2772 retry.go:31] will retry after 955.337142ms: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725804    2772 retry.go:31] will retry after 963.83028ms: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725924    2772 retry.go:31] will retry after 1.208932678s: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.725977    2772 retry.go:31] will retry after 1.415354927s: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.726032    2772 retry.go:31] will retry after 1.019190657s: connect: dial unix /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/monitor: connect: connection refused
	I0906 11:29:42.727812    2772 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-439000"
	I0906 11:29:42.728869    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:42.728664    2772 out.go:177] * Verifying Kubernetes components...
	I0906 11:29:42.735408    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 11:29:42.739614    2772 out.go:177]   - Using image docker.io/busybox:stable
	I0906 11:29:42.739643    2772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:42.742637    2772 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 11:29:42.742642    2772 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 11:29:42.742657    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:42.749627    2772 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 11:29:42.753684    2772 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 11:29:42.753690    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 11:29:42.753698    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:42.782141    2772 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
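	The sed pipeline above splices a hosts{} block into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the gateway (192.168.105.1, per the injection notice below). The patched Corefile can be read back with:

	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'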
	I0906 11:29:42.851399    2772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:29:42.864673    2772 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 11:29:42.864686    2772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 11:29:42.871878    2772 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 11:29:42.871890    2772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 11:29:42.877375    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 11:29:42.918267    2772 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 11:29:42.918280    2772 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 11:29:42.981379    2772 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 11:29:42.981391    2772 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 11:29:43.012480    2772 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:29:43.012489    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 11:29:43.030887    2772 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0906 11:29:43.031346    2772 node_ready.go:35] waiting up to 6m0s for node "addons-439000" to be "Ready" ...
	I0906 11:29:43.036974    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:29:43.037573    2772 node_ready.go:49] node "addons-439000" has status "Ready":"True"
	I0906 11:29:43.037592    2772 node_ready.go:38] duration metric: took 6.226ms for node "addons-439000" to be "Ready" ...
	I0906 11:29:43.037597    2772 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:29:43.051010    2772 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace to be "Ready" ...
	I0906 11:29:43.366969    2772 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0906 11:29:43.373933    2772 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0906 11:29:43.383897    2772 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0906 11:29:43.388314    2772 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 11:29:43.388325    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0906 11:29:43.388336    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.397943    2772 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 11:29:43.401941    2772 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 11:29:43.401949    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 11:29:43.401960    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.455055    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 11:29:43.470193    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 11:29:43.534829    2772 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-439000" context rescaled to 1 replicas
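	The rescale noted above pins CoreDNS to one replica, which is plenty for a single-node cluster; the equivalent manual command would be:

	    kubectl -n kube-system scale deployment coredns --replicas=1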
	I0906 11:29:43.572634    2772 addons.go:234] Setting addon default-storageclass=true in "addons-439000"
	I0906 11:29:43.572654    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:43.573249    2772 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 11:29:43.573258    2772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 11:29:43.573266    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.607947    2772 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 11:29:43.611978    2772 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 11:29:43.611990    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 11:29:43.612002    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.682042    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 11:29:43.686695    2772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 11:29:43.689926    2772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:29:43.693892    2772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:29:43.696915    2772 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 11:29:43.696923    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 11:29:43.696933    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.701920    2772 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 11:29:43.705869    2772 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 11:29:43.705879    2772 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 11:29:43.705891    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.706184    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 11:29:43.710882    2772 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 11:29:43.714949    2772 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:29:43.714959    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 11:29:43.714970    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.769766    2772 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 11:29:43.773015    2772 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 11:29:43.773022    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 11:29:43.773031    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.860166    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:43.940964    2772 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 11:29:43.944912    2772 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 11:29:43.950952    2772 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 11:29:43.950962    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 11:29:43.950973    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:43.970801    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 11:29:44.004359    2772 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 11:29:44.004371    2772 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	W0906 11:29:44.018005    2772 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 11:29:44.018026    2772 retry.go:31] will retry after 318.130113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
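	The retry above is a CRD-establishment race: the VolumeSnapshotClass object in the same apply batch is rejected because the CRDs created a moment earlier are not yet being served. The tool's answer is to retry (and, below, to reapply with --force); a generic alternative is to wait for the CRDs explicitly before applying resources that depend on them:

	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml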
	I0906 11:29:44.024978    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:29:44.119595    2772 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 11:29:44.119608    2772 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 11:29:44.126685    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 11:29:44.134965    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 11:29:44.137900    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 11:29:44.147896    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 11:29:44.147898    2772 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 11:29:44.147990    2772 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 11:29:44.155887    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 11:29:44.155893    2772 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 11:29:44.162808    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 11:29:44.162809    2772 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 11:29:44.162904    2772 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 11:29:44.162916    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:44.172910    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 11:29:44.179814    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 11:29:44.183949    2772 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 11:29:44.187923    2772 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 11:29:44.187937    2772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 11:29:44.187951    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:44.197853    2772 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 11:29:44.201915    2772 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 11:29:44.201926    2772 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 11:29:44.201937    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:44.210037    2772 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 11:29:44.210049    2772 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 11:29:44.228076    2772 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 11:29:44.228084    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 11:29:44.228446    2772 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 11:29:44.228452    2772 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 11:29:44.235275    2772 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 11:29:44.235287    2772 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 11:29:44.244467    2772 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 11:29:44.244480    2772 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 11:29:44.286729    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 11:29:44.304498    2772 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 11:29:44.304509    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 11:29:44.306053    2772 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 11:29:44.306059    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 11:29:44.313672    2772 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 11:29:44.313682    2772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 11:29:44.328198    2772 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 11:29:44.328210    2772 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 11:29:44.337550    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:29:44.340656    2772 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 11:29:44.340664    2772 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 11:29:44.341162    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 11:29:44.368970    2772 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 11:29:44.368983    2772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 11:29:44.393492    2772 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 11:29:44.393530    2772 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 11:29:44.417033    2772 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 11:29:44.417047    2772 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 11:29:44.500177    2772 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 11:29:44.500189    2772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 11:29:44.511872    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 11:29:44.536466    2772 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 11:29:44.536480    2772 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 11:29:44.590060    2772 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 11:29:44.590076    2772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 11:29:44.662599    2772 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 11:29:44.662608    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 11:29:44.694682    2772 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 11:29:44.694694    2772 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 11:29:44.701919    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 11:29:44.770135    2772 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 11:29:44.770146    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 11:29:44.887320    2772 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 11:29:44.887336    2772 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 11:29:45.020967    2772 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 11:29:45.020979    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 11:29:45.053862    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:45.157621    2772 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 11:29:45.157631    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 11:29:45.229082    2772 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 11:29:45.229094    2772 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 11:29:45.293260    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 11:29:47.070475    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:47.231262    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.77625225s)
	I0906 11:29:47.231262    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.761109042s)
	I0906 11:29:47.231282    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.549285542s)
	I0906 11:29:47.231342    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.5252055s)
	I0906 11:29:47.231389    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.260628667s)
	I0906 11:29:47.231409    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.206473417s)
	I0906 11:29:47.231411    2772 addons.go:475] Verifying addon ingress=true in "addons-439000"
	I0906 11:29:47.231421    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.104776792s)
	I0906 11:29:47.231444    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.94474575s)
	I0906 11:29:47.231527    2772 addons.go:475] Verifying addon registry=true in "addons-439000"
	I0906 11:29:47.237408    2772 out.go:177] * Verifying ingress addon...
	I0906 11:29:47.244436    2772 out.go:177] * Verifying registry addon...
	I0906 11:29:47.252960    2772 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 11:29:47.258879    2772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 11:29:47.276821    2772 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 11:29:47.276830    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:47.276993    2772 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 11:29:47.276999    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
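
The two kapi.go waiters above poll the API server for pods matching a label selector and log the still-Pending state on each pass until every matching pod reports Ready. A minimal client-go sketch of such a loop, assuming a kubeconfig at the default location; waitForLabel and podReady are hypothetical names, not minikube's implementation:

    // waitforlabel.go - a sketch of a label-selector readiness wait.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allReady := true
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
                        allReady = false
                    }
                }
                if allReady {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := waitForLabel(kubernetes.NewForConfigOrDie(cfg), "kube-system",
            "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Fixed-interval polling matches the roughly half-second cadence visible in the timestamps above; a watch would avoid the repeated list calls at the cost of extra connection handling.
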
	I0906 11:29:47.766180    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:47.772608    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:47.833065    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.491942167s)
	I0906 11:29:47.833090    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.321257916s)
	I0906 11:29:47.833100    2772 addons.go:475] Verifying addon metrics-server=true in "addons-439000"
	I0906 11:29:47.833100    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.131222709s)
	I0906 11:29:47.833180    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.539948s)
	I0906 11:29:47.833186    2772 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-439000"
	I0906 11:29:47.833437    2772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.49591125s)
	I0906 11:29:47.837389    2772 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-439000 service yakd-dashboard -n yakd-dashboard
	
	I0906 11:29:47.840409    2772 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 11:29:47.848771    2772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 11:29:47.874109    2772 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 11:29:47.874117    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:48.258293    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:48.261034    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:48.362030    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:48.757197    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:48.760574    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:48.853092    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:49.257152    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:49.260669    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:49.360981    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:49.555750    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:49.757311    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:49.760580    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:49.853019    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:50.320625    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:50.321031    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:50.423196    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:50.757387    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:50.760917    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:50.852494    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:51.256853    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:51.260472    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:51.353129    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:51.555842    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:51.665052    2772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 11:29:51.665069    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:51.717650    2772 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 11:29:51.729633    2772 addons.go:234] Setting addon gcp-auth=true in "addons-439000"
	I0906 11:29:51.729667    2772 host.go:66] Checking if "addons-439000" exists ...
	I0906 11:29:51.730422    2772 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 11:29:51.730432    2772 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/addons-439000/id_rsa Username:docker}
	I0906 11:29:51.756454    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:51.760932    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:51.797064    2772 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 11:29:51.809920    2772 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:29:51.819932    2772 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 11:29:51.819939    2772 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 11:29:51.826269    2772 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 11:29:51.826280    2772 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 11:29:51.832398    2772 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 11:29:51.832405    2772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 11:29:51.838193    2772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 11:29:51.921077    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:52.061160    2772 addons.go:475] Verifying addon gcp-auth=true in "addons-439000"
	I0906 11:29:52.066911    2772 out.go:177] * Verifying gcp-auth addon...
	I0906 11:29:52.077517    2772 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 11:29:52.078708    2772 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 11:29:52.257405    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:52.260659    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:52.352985    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:52.757053    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:52.760847    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:52.853375    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:53.256965    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:53.260976    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:53.352941    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:53.764196    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:53.764451    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:53.868619    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:54.055600    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:54.257757    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:54.260571    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:54.359516    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:54.757059    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:54.760864    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:54.853353    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:55.257659    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:55.260419    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:55.353294    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:55.756784    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:55.760682    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:55.853038    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:56.257030    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:56.260490    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:56.352813    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:56.555703    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:56.757084    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:56.760444    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:56.853063    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:57.256887    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:57.260512    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:57.518498    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:57.757041    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:57.760622    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:57.852933    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:58.257018    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:58.260602    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:58.352975    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:58.555861    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:58.756839    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:58.760510    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:58.853018    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:59.256857    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:59.260522    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:59.362618    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:29:59.757001    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:29:59.760476    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:29:59.858459    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:00.257929    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:00.260790    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:00.352557    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:00.756756    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:00.760503    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:00.852709    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:01.056252    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:01.256559    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:01.260914    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:01.352778    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:01.756820    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:01.760842    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:01.853202    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:02.256500    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:02.260519    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:02.351840    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:02.756907    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:02.760276    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:02.852779    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:03.111844    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:03.257635    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:03.260857    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:03.353332    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:03.756829    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:03.760672    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:03.854439    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:04.257930    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:04.260286    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:04.354203    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:04.762966    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:04.763499    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:04.865294    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:05.257063    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:05.260865    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:05.353478    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:05.555489    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:05.756950    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:05.760988    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:05.862041    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:06.256628    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:06.260558    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.351092    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:06.756660    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:06.760432    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.857581    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:07.255217    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:07.260896    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:07.352922    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:07.756722    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:07.760436    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:07.852944    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:08.057190    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:08.256946    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:08.260259    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:08.352891    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:08.756747    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:08.760580    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:08.852873    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:09.418809    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:09.419127    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:09.419155    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:09.759202    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:09.761265    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:09.858603    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:10.057481    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:10.257899    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:10.260842    2772 kapi.go:107] duration metric: took 23.002327125s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 11:30:10.355103    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:10.757006    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:10.853358    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:11.256936    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:11.351897    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:11.754629    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:11.853132    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:12.255564    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:12.352694    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:12.554962    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:12.756723    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:12.852546    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:13.256722    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:13.352940    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:13.791942    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:13.853212    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:14.255058    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:14.351774    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:14.559292    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:14.757000    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:14.852975    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:15.256639    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:15.352783    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:15.766811    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:15.858886    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:16.256920    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:16.352628    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:16.756689    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:16.852530    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:17.055121    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:17.256795    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:17.353702    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:17.756932    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:17.852796    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:18.256665    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:18.352796    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:18.757521    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:18.853064    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:19.256971    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:19.353290    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:19.555402    2772 pod_ready.go:103] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:19.757024    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:19.851577    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:20.255877    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:20.352536    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:20.756734    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:20.852451    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:21.256535    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:21.352493    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:21.555018    2772 pod_ready.go:93] pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:21.555028    2772 pod_ready.go:82] duration metric: took 38.504608833s for pod "coredns-6f6b679f8f-gn6kg" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.555033    2772 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-m6cpg" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.555802    2772 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-m6cpg" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-m6cpg" not found
	I0906 11:30:21.555808    2772 pod_ready.go:82] duration metric: took 772.208µs for pod "coredns-6f6b679f8f-m6cpg" in "kube-system" namespace to be "Ready" ...
	E0906 11:30:21.555812    2772 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-m6cpg" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-m6cpg" not found
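
The second coredns replica was scaled away before the waiter reached it, so pod_ready.go treats a not-found pod as "skipping!" rather than as a failure. A compilable sketch of that decision; podGone is a hypothetical helper, and errors.IsNotFound is the real client-go check:

    // Package podready sketches the skip-on-NotFound decision logged above.
    package podready

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podGone reports whether a readiness wait should skip the pod because it
    // no longer exists (e.g. a surplus coredns replica was scaled down).
    func podGone(cs kubernetes.Interface, ns, name string) (bool, error) {
        _, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // pod deleted: skip, don't fail the overall wait
        }
        return false, err
    }
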
	I0906 11:30:21.555815    2772 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.557987    2772 pod_ready.go:93] pod "etcd-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:21.557995    2772 pod_ready.go:82] duration metric: took 2.176583ms for pod "etcd-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.557999    2772 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.559884    2772 pod_ready.go:93] pod "kube-apiserver-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:21.559889    2772 pod_ready.go:82] duration metric: took 1.886125ms for pod "kube-apiserver-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.559892    2772 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.561818    2772 pod_ready.go:93] pod "kube-controller-manager-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:21.561822    2772 pod_ready.go:82] duration metric: took 1.92725ms for pod "kube-controller-manager-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.561826    2772 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bp9v8" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.756179    2772 pod_ready.go:93] pod "kube-proxy-bp9v8" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:21.756190    2772 pod_ready.go:82] duration metric: took 194.364625ms for pod "kube-proxy-bp9v8" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.756196    2772 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:21.756724    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:21.852335    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:22.156091    2772 pod_ready.go:93] pod "kube-scheduler-addons-439000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:22.156100    2772 pod_ready.go:82] duration metric: took 399.906708ms for pod "kube-scheduler-addons-439000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:22.156104    2772 pod_ready.go:39] duration metric: took 39.119118583s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:30:22.156113    2772 api_server.go:52] waiting for apiserver process to appear ...
	I0906 11:30:22.156195    2772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:30:22.163015    2772 api_server.go:72] duration metric: took 39.438796708s to wait for apiserver process to appear ...
	I0906 11:30:22.163023    2772 api_server.go:88] waiting for apiserver healthz status ...
	I0906 11:30:22.163031    2772 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0906 11:30:22.165632    2772 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0906 11:30:22.166145    2772 api_server.go:141] control plane version: v1.31.0
	I0906 11:30:22.166154    2772 api_server.go:131] duration metric: took 3.1285ms to wait for apiserver health ...
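
api_server.go first confirms a kube-apiserver process exists (the pgrep run above), then polls the /healthz endpoint until it returns 200 with body "ok". A minimal sketch of the HTTP half using the endpoint from the log; certificate verification is skipped purely for illustration, whereas the real client authenticates against the cluster's CA:

    // healthz.go - a sketch of the apiserver healthz poll logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Illustration only: a production check should verify the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://192.168.105.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("apiserver never reported healthy")
    }
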
	I0906 11:30:22.166158    2772 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 11:30:22.256636    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:22.352674    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:22.356866    2772 system_pods.go:59] 17 kube-system pods found
	I0906 11:30:22.356874    2772 system_pods.go:61] "coredns-6f6b679f8f-gn6kg" [916b944d-f0f7-4090-93e0-4190f5128fc0] Running
	I0906 11:30:22.356878    2772 system_pods.go:61] "csi-hostpath-attacher-0" [275b1d8f-facd-4753-b3d4-58f7d4f96d8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 11:30:22.356882    2772 system_pods.go:61] "csi-hostpath-resizer-0" [83f74478-af65-41f8-951c-a83954edd33c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 11:30:22.356886    2772 system_pods.go:61] "csi-hostpathplugin-j4xdr" [1153206a-9b54-4a28-8ee7-fdecdd452fc6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 11:30:22.356889    2772 system_pods.go:61] "etcd-addons-439000" [27de6062-df3b-4e58-84fb-3bb839ede04b] Running
	I0906 11:30:22.356891    2772 system_pods.go:61] "kube-apiserver-addons-439000" [e18df9d4-9fa4-49dc-a1ec-c194f34b2398] Running
	I0906 11:30:22.356893    2772 system_pods.go:61] "kube-controller-manager-addons-439000" [8d967d80-6525-423a-970f-a21e3b116877] Running
	I0906 11:30:22.356895    2772 system_pods.go:61] "kube-ingress-dns-minikube" [33539311-1fe9-4e25-a9f6-99843707bfe8] Running
	I0906 11:30:22.356898    2772 system_pods.go:61] "kube-proxy-bp9v8" [3c351e6c-65a7-4137-ac3f-19d35c04fbd5] Running
	I0906 11:30:22.356900    2772 system_pods.go:61] "kube-scheduler-addons-439000" [a4b61d8e-4878-4174-aaf4-21b6ef6ff2b3] Running
	I0906 11:30:22.356902    2772 system_pods.go:61] "metrics-server-84c5f94fbc-fjw2z" [7946e3d2-10e1-49f2-a6ba-3c9e7340a22e] Running
	I0906 11:30:22.356904    2772 system_pods.go:61] "nvidia-device-plugin-daemonset-nlzkn" [1150cb9e-9cc1-4002-99b8-1f2bafe93a02] Running
	I0906 11:30:22.356905    2772 system_pods.go:61] "registry-6fb4cdfc84-68fq8" [cbfa4ae6-52b4-4753-8931-dc75977f2b98] Running
	I0906 11:30:22.356907    2772 system_pods.go:61] "registry-proxy-77sc6" [6010aca8-2072-44fb-abeb-395ddabbb03a] Running
	I0906 11:30:22.356909    2772 system_pods.go:61] "snapshot-controller-56fcc65765-gttxc" [6fe5f98b-0e97-4ee4-b39c-89821ce9960a] Running
	I0906 11:30:22.356918    2772 system_pods.go:61] "snapshot-controller-56fcc65765-s9h27" [d571a725-eeba-4e8b-a7eb-429ea80cc35b] Running
	I0906 11:30:22.356922    2772 system_pods.go:61] "storage-provisioner" [2b31b6f3-ae26-4690-b2eb-d305e4a6b3c5] Running
	I0906 11:30:22.356925    2772 system_pods.go:74] duration metric: took 190.767583ms to wait for pod list to return data ...
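
The "Pending / Ready:ContainersNotReady (...)" annotations above are rendered from each pod's phase plus any Ready/ContainersReady conditions that are not yet True. A compilable sketch of deriving such a summary from a pod's status; the package and function names are assumptions, not minikube's code:

    // Package podsummary sketches the per-pod status line format seen above.
    package podsummary

    import (
        "fmt"
        "strings"

        corev1 "k8s.io/api/core/v1"
    )

    // Summarize renders the phase plus any non-True readiness conditions,
    // e.g. "Pending / Ready:ContainersNotReady / ContainersReady:ContainersNotReady".
    func Summarize(pod *corev1.Pod) string {
        parts := []string{string(pod.Status.Phase)}
        for _, c := range pod.Status.Conditions {
            notTrue := c.Status != corev1.ConditionTrue
            if notTrue && (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) {
                parts = append(parts, fmt.Sprintf("%s:%s", c.Type, c.Reason))
            }
        }
        return strings.Join(parts, " / ")
    }
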
	I0906 11:30:22.356929    2772 default_sa.go:34] waiting for default service account to be created ...
	I0906 11:30:22.555948    2772 default_sa.go:45] found service account: "default"
	I0906 11:30:22.555962    2772 default_sa.go:55] duration metric: took 199.03175ms for default service account to be created ...
	I0906 11:30:22.555966    2772 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 11:30:22.758510    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:22.760348    2772 system_pods.go:86] 17 kube-system pods found
	I0906 11:30:22.760358    2772 system_pods.go:89] "coredns-6f6b679f8f-gn6kg" [916b944d-f0f7-4090-93e0-4190f5128fc0] Running
	I0906 11:30:22.760362    2772 system_pods.go:89] "csi-hostpath-attacher-0" [275b1d8f-facd-4753-b3d4-58f7d4f96d8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 11:30:22.760366    2772 system_pods.go:89] "csi-hostpath-resizer-0" [83f74478-af65-41f8-951c-a83954edd33c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 11:30:22.760373    2772 system_pods.go:89] "csi-hostpathplugin-j4xdr" [1153206a-9b54-4a28-8ee7-fdecdd452fc6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 11:30:22.760376    2772 system_pods.go:89] "etcd-addons-439000" [27de6062-df3b-4e58-84fb-3bb839ede04b] Running
	I0906 11:30:22.760379    2772 system_pods.go:89] "kube-apiserver-addons-439000" [e18df9d4-9fa4-49dc-a1ec-c194f34b2398] Running
	I0906 11:30:22.760381    2772 system_pods.go:89] "kube-controller-manager-addons-439000" [8d967d80-6525-423a-970f-a21e3b116877] Running
	I0906 11:30:22.760384    2772 system_pods.go:89] "kube-ingress-dns-minikube" [33539311-1fe9-4e25-a9f6-99843707bfe8] Running
	I0906 11:30:22.760386    2772 system_pods.go:89] "kube-proxy-bp9v8" [3c351e6c-65a7-4137-ac3f-19d35c04fbd5] Running
	I0906 11:30:22.760388    2772 system_pods.go:89] "kube-scheduler-addons-439000" [a4b61d8e-4878-4174-aaf4-21b6ef6ff2b3] Running
	I0906 11:30:22.760390    2772 system_pods.go:89] "metrics-server-84c5f94fbc-fjw2z" [7946e3d2-10e1-49f2-a6ba-3c9e7340a22e] Running
	I0906 11:30:22.760393    2772 system_pods.go:89] "nvidia-device-plugin-daemonset-nlzkn" [1150cb9e-9cc1-4002-99b8-1f2bafe93a02] Running
	I0906 11:30:22.760394    2772 system_pods.go:89] "registry-6fb4cdfc84-68fq8" [cbfa4ae6-52b4-4753-8931-dc75977f2b98] Running
	I0906 11:30:22.760398    2772 system_pods.go:89] "registry-proxy-77sc6" [6010aca8-2072-44fb-abeb-395ddabbb03a] Running
	I0906 11:30:22.760400    2772 system_pods.go:89] "snapshot-controller-56fcc65765-gttxc" [6fe5f98b-0e97-4ee4-b39c-89821ce9960a] Running
	I0906 11:30:22.760407    2772 system_pods.go:89] "snapshot-controller-56fcc65765-s9h27" [d571a725-eeba-4e8b-a7eb-429ea80cc35b] Running
	I0906 11:30:22.760409    2772 system_pods.go:89] "storage-provisioner" [2b31b6f3-ae26-4690-b2eb-d305e4a6b3c5] Running
	I0906 11:30:22.760413    2772 system_pods.go:126] duration metric: took 204.447542ms to wait for k8s-apps to be running ...
	I0906 11:30:22.760418    2772 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 11:30:22.760466    2772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 11:30:22.766928    2772 system_svc.go:56] duration metric: took 6.507833ms WaitForService to wait for kubelet
	I0906 11:30:22.766937    2772 kubeadm.go:582] duration metric: took 40.042728375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:30:22.766946    2772 node_conditions.go:102] verifying NodePressure condition ...
	I0906 11:30:22.852414    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:22.957033    2772 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 11:30:22.957045    2772 node_conditions.go:123] node cpu capacity is 2
	I0906 11:30:22.957051    2772 node_conditions.go:105] duration metric: took 190.105834ms to run NodePressure ...
	I0906 11:30:22.957057    2772 start.go:241] waiting for startup goroutines ...
	I0906 11:30:23.256602    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:23.352765    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:23.758640    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:23.853739    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:24.256620    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:24.352869    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:24.756583    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:24.852658    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.256677    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:25.352762    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.757013    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:25.853233    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:26.257249    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:26.354238    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:26.756353    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:26.852160    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:27.256407    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:27.352449    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:27.756958    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:27.852500    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:28.256814    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:28.353605    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:28.755533    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:28.858476    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:29.256596    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:29.352379    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:29.756425    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:29.852356    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:30.256695    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:30.352922    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:30.756849    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:30.852095    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:31.256708    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:31.352155    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:31.756509    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:31.852360    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:32.257452    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:32.353535    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:32.756456    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:32.852579    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:33.256243    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:33.352984    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:33.756385    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:33.852489    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:34.255341    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:34.350848    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:34.756641    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:34.852090    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:35.256177    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:35.352388    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:35.756922    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:35.858103    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:36.256428    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:36.357678    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:36.755813    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:36.852358    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:37.256767    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:37.352713    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:37.756308    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:37.852049    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:38.256480    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.352079    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:38.756541    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.856757    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:39.257002    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:39.353110    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:39.757036    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:39.852833    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:40.256180    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:40.352392    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:40.756523    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:40.852433    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:41.256549    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:41.352102    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:41.756618    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:41.852328    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:42.256169    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:42.351949    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:42.756292    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:42.852346    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:43.256677    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:43.353120    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:43.757429    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:43.853999    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:44.256397    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:44.352123    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:44.756377    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:44.852322    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:45.255473    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:45.352318    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:45.756655    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:45.853865    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:46.258944    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:46.352946    2772 kapi.go:107] duration metric: took 58.505096958s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 11:30:46.761515    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:47.257406    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:47.757841    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:48.261498    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:48.759796    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:49.268593    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:49.761413    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:50.263791    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:50.756770    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:51.257042    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:51.757684    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:52.256345    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:52.755681    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:53.255970    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:53.756075    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:54.328391    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:54.756191    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:55.256323    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:55.770730    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:56.256209    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:56.755600    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:57.256375    2772 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:57.756575    2772 kapi.go:107] duration metric: took 1m10.504728167s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 11:31:14.580357    2772 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 11:31:14.580370    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:15.081395    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:15.582587    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:16.081782    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:16.581443    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:17.082370    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:17.581890    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:18.080415    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:18.580152    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:19.082146    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:19.582364    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:20.081311    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:20.580294    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:21.082314    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:21.583916    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:22.082056    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:22.584000    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:23.081795    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:23.586502    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:24.081073    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:24.580428    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:25.080163    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:25.584082    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:26.080741    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:26.588178    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:27.081754    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:27.580750    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:28.079678    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:28.581589    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:29.080720    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:29.581292    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:30.081443    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:30.586195    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:31.082252    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:31.581401    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:32.081267    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:32.585890    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:33.082204    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:33.581731    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:34.082081    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:34.580622    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:35.080743    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:35.580784    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:36.081638    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:36.584617    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:37.083443    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:37.581561    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:38.079687    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:38.585465    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:39.083543    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:39.585007    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:40.081458    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:40.580076    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:41.080138    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:41.581585    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:42.080963    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:42.583859    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:43.081374    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:43.581270    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:44.081369    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:44.580737    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:45.081441    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:45.580099    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:46.080549    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:46.581442    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:47.081347    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:47.583851    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:48.080449    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:48.580654    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:49.079304    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:49.581387    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:50.083775    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:50.581874    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:51.080714    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:51.580310    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:52.083345    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:52.580574    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:53.081520    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:53.580267    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:54.083769    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:54.581673    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:55.080670    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:55.581150    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:56.079564    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:56.579381    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:57.080226    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:57.579538    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:58.079622    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:58.579328    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:59.079215    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:31:59.584619    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:00.085270    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:00.589379    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:01.080252    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:01.586083    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:02.084590    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:02.583732    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:03.080658    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:03.584523    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:04.082991    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:04.581102    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:05.079689    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:05.579771    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:06.079623    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:06.580877    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:07.080538    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:07.583464    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:08.078035    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:08.580752    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:09.083024    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:09.582859    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:10.079181    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:10.579940    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:11.078978    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:11.579685    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:12.080689    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:12.586143    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:13.081847    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:13.587449    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:14.081194    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:14.581126    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:15.080466    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:15.583286    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:16.080767    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:16.578682    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:17.079996    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:17.581296    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:18.079015    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:18.579165    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:19.078858    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:19.579183    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:20.078906    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:20.579801    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:21.079101    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:21.578957    2772 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:22.079090    2772 kapi.go:107] duration metric: took 2m30.003937584s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 11:32:22.083599    2772 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-439000 cluster.
	I0906 11:32:22.087505    2772 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 11:32:22.090423    2772 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 11:32:22.095480    2772 out.go:177] * Enabled addons: storage-provisioner-rancher, volcano, nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, default-storageclass, inspektor-gadget, metrics-server, volumesnapshots, yakd, registry, csi-hostpath-driver, ingress, gcp-auth
	I0906 11:32:22.098356    2772 addons.go:510] duration metric: took 2m39.376021625s for enable addons: enabled=[storage-provisioner-rancher volcano nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner default-storageclass inspektor-gadget metrics-server volumesnapshots yakd registry csi-hostpath-driver ingress gcp-auth]
	I0906 11:32:22.098370    2772 start.go:246] waiting for cluster config update ...
	I0906 11:32:22.098380    2772 start.go:255] writing updated cluster config ...
	I0906 11:32:22.098761    2772 ssh_runner.go:195] Run: rm -f paused
	I0906 11:32:22.250278    2772 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0906 11:32:22.253561    2772 out.go:201] 
	W0906 11:32:22.256453    2772 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0906 11:32:22.260418    2772 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0906 11:32:22.267483    2772 out.go:177] * Done! kubectl is now configured to use "addons-439000" cluster and "default" namespace by default
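
	The gcp-auth messages above can be acted on directly. A minimal sketch of opting a single pod out of credential mounting via the `gcp-auth-skip-secret` label key the addon mentions (the pod name is hypothetical, the image is one already pulled in this run, and the label value is arbitrary since the message only names the key):

	$ kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo        # hypothetical example pod
	  labels:
	    gcp-auth-skip-secret: "true"  # key the gcp-auth webhook looks for; the message only specifies the key
	spec:
	  containers:
	  - name: app
	    image: kicbase/echo-server:1.0
	EOF

	Pods created before the addon was enabled only pick up the mount after being recreated, or after rerunning `minikube addons enable gcp-auth --refresh`, as the output notes. The kubectl/cluster skew warning at the end can likewise be sidestepped with the bundled client, e.g. `minikube kubectl -- get pods -A`.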
	
	
	==> Docker <==
	Sep 06 18:42:13 addons-439000 dockerd[1272]: time="2024-09-06T18:42:13.252099619Z" level=info msg="ignoring event" container=bcf81a9ab32692767b913cf3a0771466b1bf133d660f548d04b3f3902c53e8cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.252514771Z" level=info msg="shim disconnected" id=bcf81a9ab32692767b913cf3a0771466b1bf133d660f548d04b3f3902c53e8cd namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.252716948Z" level=warning msg="cleaning up after shim disconnected" id=bcf81a9ab32692767b913cf3a0771466b1bf133d660f548d04b3f3902c53e8cd namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.252738587Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:42:13 addons-439000 cri-dockerd[1170]: time="2024-09-06T18:42:13Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.322998122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.323023431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.323111073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.323142969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:42:13 addons-439000 dockerd[1272]: time="2024-09-06T18:42:13.402044607Z" level=info msg="ignoring event" container=9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.412140014Z" level=info msg="shim disconnected" id=9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76 namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.412469818Z" level=warning msg="cleaning up after shim disconnected" id=9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76 namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.412476239Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1272]: time="2024-09-06T18:42:13.445100604Z" level=info msg="ignoring event" container=a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.445411603Z" level=info msg="shim disconnected" id=a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.446155725Z" level=warning msg="cleaning up after shim disconnected" id=a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.446190082Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1272]: time="2024-09-06T18:42:13.499317261Z" level=info msg="ignoring event" container=da6fb1dc798ed11902b157fa9d0675def2bf45af3ccc98e39dcbeccc4f6e2d3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.499944263Z" level=info msg="shim disconnected" id=da6fb1dc798ed11902b157fa9d0675def2bf45af3ccc98e39dcbeccc4f6e2d3a namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.499974158Z" level=warning msg="cleaning up after shim disconnected" id=da6fb1dc798ed11902b157fa9d0675def2bf45af3ccc98e39dcbeccc4f6e2d3a namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.499978369Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1272]: time="2024-09-06T18:42:13.563688531Z" level=info msg="ignoring event" container=6534fbc3808d0677b8912b490e2298a254ab5b53b4fc38b5d080d8f8fff23a8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.563890708Z" level=info msg="shim disconnected" id=6534fbc3808d0677b8912b490e2298a254ab5b53b4fc38b5d080d8f8fff23a8e namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.563961714Z" level=warning msg="cleaning up after shim disconnected" id=6534fbc3808d0677b8912b490e2298a254ab5b53b4fc38b5d080d8f8fff23a8e namespace=moby
	Sep 06 18:42:13 addons-439000 dockerd[1280]: time="2024-09-06T18:42:13.563983478Z" level=info msg="cleaning up dead shim" namespace=moby
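
	The `shim disconnected` / `cleaning up dead shim` pairs here are routine containerd shim teardown emitted as containers exit, not errors in themselves; the IDs correspond to the exited registry and registry-proxy containers (and their pod sandboxes) in the container status listing below. To pull the same journal from the node for closer inspection, a sketch, assuming the guest image's systemd unit is named `docker` as these journal lines suggest:

	$ minikube ssh -- sudo journalctl -u docker --no-pager | grep 'shim disconnected'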
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	6f0fefa6acba3       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  Less than a second ago   Running             hello-world-app            0                   d5aacfc9c4120       hello-world-app-55bf9c44b4-8s4x8
	822e40d1da047       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                8 seconds ago            Running             nginx                      0                   522b2355bf2bb       nginx
	7b41fe3dca6ec       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago            Running             gcp-auth                   0                   4c739c83f3daa       gcp-auth-89d5ffd79-l728g
	56d19eb42913f       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago           Running             controller                 0                   d010fd8b2db38       ingress-nginx-controller-bc57996ff-6lpvd
	23f4345d3b7be       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        11 minutes ago           Running             yakd                       0                   696b065b24705       yakd-dashboard-67d98fc6b-j6tdc
	e1fc439644a33       420193b27261a                                                                                                                11 minutes ago           Exited              patch                      1                   a5bd65338c697       ingress-nginx-admission-patch-84p2h
	fce4f361bd25f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago           Exited              create                     0                   f67c9594d2307       ingress-nginx-admission-create-5qcjv
	a3d1bd8fd9281       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago           Exited              registry-proxy             0                   6534fbc3808d0       registry-proxy-77sc6
	9b889522e6504       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                             12 minutes ago           Exited              registry                   0                   da6fb1dc798ed       registry-6fb4cdfc84-68fq8
	cc870e95e5561       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago           Running             cloud-spanner-emulator     0                   95b5f4e4f5338       cloud-spanner-emulator-769b77f747-zllxf
	9def007576dca       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago           Running             nvidia-device-plugin-ctr   0                   ae6b432bcfb31       nvidia-device-plugin-daemonset-nlzkn
	db36b37300429       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago           Running             local-path-provisioner     0                   238de4ca7908e       local-path-provisioner-86d989889c-q47d2
	af4215b2a1ce2       ba04bb24b9575                                                                                                                12 minutes ago           Running             storage-provisioner        0                   0b9c225b63556       storage-provisioner
	46a0c5be8ad7b       2437cf7621777                                                                                                                12 minutes ago           Running             coredns                    0                   929fa19600166       coredns-6f6b679f8f-gn6kg
	ec7b31e89499f       71d55d66fd4ee                                                                                                                12 minutes ago           Running             kube-proxy                 0                   3129eef6327d0       kube-proxy-bp9v8
	9f2e7ff609987       cd0f0ae0ec9e0                                                                                                                12 minutes ago           Running             kube-apiserver             0                   c7ce5030e63db       kube-apiserver-addons-439000
	72689c42ca882       fbbbd428abb4d                                                                                                                12 minutes ago           Running             kube-scheduler             0                   c6c30cd2bea29       kube-scheduler-addons-439000
	621c14c5591e5       fcb0683e6bdbd                                                                                                                12 minutes ago           Running             kube-controller-manager    0                   769aa5e8b145d       kube-controller-manager-addons-439000
	916ca142da774       27e3830e14027                                                                                                                12 minutes ago           Running             etcd                       0                   7fd85fe37fdee       etcd-addons-439000
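
	The columns above (ATTEMPT, POD ID, POD) are CRI-style, so an equivalent listing can likely be reproduced on the node with crictl pointed at the cri-dockerd socket recorded in the node annotations below; a sketch, assuming crictl is present in the guest image:

	$ minikube ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a

	The `-a` flag keeps exited containers visible, such as the admission `create`/`patch` jobs and the stopped registry pair.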
	
	
	==> controller_ingress [56d19eb42913] <==
	10.244.0.1 - - [06/Sep/2024:18:42:11 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 80 0.000 [default-nginx-80] [] 10.244.0.30:80 615 0.000 200 ebc2498325af9b530bfabc2a5f96c87b
	I0906 18:42:01.887633       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-6lpvd", UID:"1a637420-1039-44af-a0c0-7920acb99f6e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0906 18:42:05.205668       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0906 18:42:05.205738       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0906 18:42:05.221332       7 controller.go:213] "Backend successfully reloaded"
	I0906 18:42:05.221687       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-6lpvd", UID:"1a637420-1039-44af-a0c0-7920acb99f6e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0906 18:42:11.177416       7 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0906 18:42:11.189460       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.012s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.012s testedConfigurationSize:26.2kB}
	I0906 18:42:11.189513       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0906 18:42:11.193152       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0906 18:42:11.193993       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"7d0b1fbf-3481-4528-9cbe-d5ed3c81c495", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2766", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0906 18:42:11.872070       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0906 18:42:11.887236       7 controller.go:213] "Backend successfully reloaded"
	I0906 18:42:11.887549       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-6lpvd", UID:"1a637420-1039-44af-a0c0-7920acb99f6e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0906 18:42:12.066837       7 sigterm.go:36] "Received SIGTERM, shutting down"
	I0906 18:42:12.066859       7 nginx.go:393] "Shutting down controller queues"
	E0906 18:42:12.067669       7 status.go:120] "error obtaining running IP address" err="pods is forbidden: User \"system:serviceaccount:ingress-nginx:ingress-nginx\" cannot list resource \"pods\" in API group \"\" in the namespace \"ingress-nginx\""
	I0906 18:42:12.067678       7 nginx.go:401] "Stopping admission controller"
	E0906 18:42:12.067738       7 nginx.go:340] "Error listening for TLS connections" err="http: Server closed"
	I0906 18:42:12.067815       7 nginx.go:409] "Stopping NGINX process"
	2024/09/06 18:42:12 [notice] 314#314: signal process started
	I0906 18:42:13.082907       7 nginx.go:422] "NGINX process has stopped"
	I0906 18:42:13.082921       7 sigterm.go:44] Handled quit, delaying controller exit for 10 seconds
	E0906 18:42:13.743128       7 leaderelection.go:340] Failed to update lock optimitically: leases.coordination.k8s.io "ingress-nginx-leader" is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot update resource "leases" in API group "coordination.k8s.io" in the namespace "ingress-nginx", falling back to slow path
	E0906 18:42:13.743439       7 leaderelection.go:347] error retrieving resource lock ingress-nginx/ingress-nginx-leader: leases.coordination.k8s.io "ingress-nginx-leader" is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "ingress-nginx"
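
	The tail of this log is the controller's normal graceful shutdown: on SIGTERM it stops the admission controller and the NGINX process, then deliberately delays exit for 10 seconds (to let in-flight requests drain). The `forbidden` lease errors after that point are most plausibly the leader-election loop racing the deletion of the addon's RBAC bindings during teardown rather than a cluster fault. While a controller pod is still running, the same stream can be followed with the label selector this run waited on:

	$ kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=50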
	
	
	==> coredns [46a0c5be8ad7] <==
	[INFO] 10.244.0.20:46246 - 61909 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026476s
	[INFO] 10.244.0.20:46246 - 7398 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050909s
	[INFO] 10.244.0.20:46246 - 46505 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038651s
	[INFO] 10.244.0.20:46246 - 62291 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00071506s
	[INFO] 10.244.0.20:50381 - 11213 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065168s
	[INFO] 10.244.0.20:50381 - 54186 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015802s
	[INFO] 10.244.0.20:50381 - 57484 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012342s
	[INFO] 10.244.0.20:50381 - 53494 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010924s
	[INFO] 10.244.0.20:50381 - 57774 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010549s
	[INFO] 10.244.0.20:50381 - 50854 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001034s
	[INFO] 10.244.0.20:50381 - 46369 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014302s
	[INFO] 10.244.0.20:38605 - 22257 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030812s
	[INFO] 10.244.0.20:35228 - 20248 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000217313s
	[INFO] 10.244.0.20:38605 - 54033 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000016969s
	[INFO] 10.244.0.20:38605 - 2980 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013551s
	[INFO] 10.244.0.20:38605 - 57790 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015218s
	[INFO] 10.244.0.20:38605 - 59330 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001205s
	[INFO] 10.244.0.20:38605 - 57557 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011633s
	[INFO] 10.244.0.20:35228 - 33639 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013384s
	[INFO] 10.244.0.20:38605 - 24825 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013384s
	[INFO] 10.244.0.20:35228 - 11378 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001205s
	[INFO] 10.244.0.20:35228 - 32472 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027644s
	[INFO] 10.244.0.20:35228 - 4788 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032981s
	[INFO] 10.244.0.20:35228 - 6406 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010757s
	[INFO] 10.244.0.20:35228 - 12295 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014968s
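
	The NXDOMAIN bursts here are expected resolver search-path expansion rather than failures: with the default `ndots:5`, `hello-world-app.default.svc.cluster.local` (four dots) is first tried against each search suffix of the querying pod's namespace (`ingress-nginx.svc.cluster.local`, then `svc.cluster.local`, then `cluster.local`) before the literal name answers NOERROR. The search configuration can be confirmed from any pod, for example the `busybox` pod in the default namespace listed below:

	$ kubectl exec busybox -- cat /etc/resolv.conf

	which would typically show `search default.svc.cluster.local svc.cluster.local cluster.local` and `options ndots:5`.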
	
	
	==> describe nodes <==
	Name:               addons-439000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-439000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-439000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_29_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-439000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:29:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-439000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:42:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:42:13 +0000   Fri, 06 Sep 2024 18:29:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:42:13 +0000   Fri, 06 Sep 2024 18:29:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:42:13 +0000   Fri, 06 Sep 2024 18:29:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:42:13 +0000   Fri, 06 Sep 2024 18:29:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-439000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 8099a51d024445c4bce685bd8392b5c5
	  System UUID:                8099a51d024445c4bce685bd8392b5c5
	  Boot ID:                    0c22b881-5074-4387-988f-e903026460d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-zllxf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-8s4x8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  gcp-auth                    gcp-auth-89d5ffd79-l728g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-gn6kg                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-439000                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-439000               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-439000      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bp9v8                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-439000               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-nlzkn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-q47d2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-j6tdc             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-439000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-439000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-439000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-439000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-439000 event: Registered Node addons-439000 in Controller
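The Allocated resources block in the node description above is simply the column sums of the non-terminated pods table, which is worth verifying when diagnosing scheduling pressure:

	cpu requests:    100m + 100m + 250m + 200m + 100m = 750m  ->  750m / 2000m ≈ 37%
	memory requests: 70Mi + 100Mi + 128Mi = 298Mi             ->  298Mi / 3813Mi ≈ 7%
	memory limits:   170Mi + 256Mi = 426Mi                    ->  426Mi / 3813Mi ≈ 11%

(3813Mi is the 3904744Ki allocatable memory converted to Mi; the percentages match the table above.)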
	
	
	==> dmesg <==
	[Sep 6 18:30] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.942204] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.187260] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.506623] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.622737] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.060154] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.959657] kauditd_printk_skb: 10 callbacks suppressed
	[ +13.368887] kauditd_printk_skb: 22 callbacks suppressed
	[Sep 6 18:31] kauditd_printk_skb: 18 callbacks suppressed
	[ +45.182987] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:32] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.465408] kauditd_printk_skb: 2 callbacks suppressed
	[ +21.358309] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.320255] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:33] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.892022] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:36] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:41] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.288857] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.467119] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.733611] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.525716] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.450372] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.269337] kauditd_printk_skb: 4 callbacks suppressed
	[Sep 6 18:42] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [916ca142da77] <==
	{"level":"info","ts":"2024-09-06T18:29:50.559367Z","caller":"traceutil/trace.go:171","msg":"trace[2051060038] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"247.695269ms","start":"2024-09-06T18:29:50.311667Z","end":"2024-09-06T18:29:50.559363Z","steps":["trace[2051060038] 'process raft request'  (duration: 239.389539ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:29:50.559561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.632985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-06T18:29:50.559572Z","caller":"traceutil/trace.go:171","msg":"trace[299897926] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:881; }","duration":"113.663714ms","start":"2024-09-06T18:29:50.445905Z","end":"2024-09-06T18:29:50.559568Z","steps":["trace[299897926] 'agreement among raft nodes before linearized reading'  (duration: 113.602418ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:29:51.365361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.392143ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:29:51.365676Z","caller":"traceutil/trace.go:171","msg":"trace[990625719] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:887; }","duration":"163.481558ms","start":"2024-09-06T18:29:51.201911Z","end":"2024-09-06T18:29:51.365392Z","steps":["trace[990625719] 'range keys from in-memory index tree'  (duration: 163.386095ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:29:57.655016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.564889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:29:57.655060Z","caller":"traceutil/trace.go:171","msg":"trace[2044823068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"163.62002ms","start":"2024-09-06T18:29:57.491432Z","end":"2024-09-06T18:29:57.655052Z","steps":["trace[2044823068] 'range keys from in-memory index tree'  (duration: 163.537882ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:09.499654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.810543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:09.499701Z","caller":"traceutil/trace.go:171","msg":"trace[1005125793] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"155.886127ms","start":"2024-09-06T18:30:09.343807Z","end":"2024-09-06T18:30:09.499693Z","steps":["trace[1005125793] 'range keys from in-memory index tree'  (duration: 155.763397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:09.499839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.419796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:09.499855Z","caller":"traceutil/trace.go:171","msg":"trace[42077094] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"160.436094ms","start":"2024-09-06T18:30:09.339415Z","end":"2024-09-06T18:30:09.499851Z","steps":["trace[42077094] 'range keys from in-memory index tree'  (duration: 160.377639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:13.764584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.990805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-439000\" ","response":"range_response_count:1 size:5172"}
	{"level":"info","ts":"2024-09-06T18:30:13.764616Z","caller":"traceutil/trace.go:171","msg":"trace[1675539025] range","detail":"{range_begin:/registry/minions/addons-439000; range_end:; response_count:1; response_revision:1019; }","duration":"130.028624ms","start":"2024-09-06T18:30:13.634579Z","end":"2024-09-06T18:30:13.764608Z","steps":["trace[1675539025] 'range keys from in-memory index tree'  (duration: 129.921992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:13.764624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.940205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:13.764638Z","caller":"traceutil/trace.go:171","msg":"trace[650812845] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1019; }","duration":"108.958719ms","start":"2024-09-06T18:30:13.655676Z","end":"2024-09-06T18:30:13.764634Z","steps":["trace[650812845] 'range keys from in-memory index tree'  (duration: 108.915533ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:21.208358Z","caller":"traceutil/trace.go:171","msg":"trace[2086298480] transaction","detail":"{read_only:false; response_revision:1067; number_of_response:1; }","duration":"109.574519ms","start":"2024-09-06T18:30:21.098776Z","end":"2024-09-06T18:30:21.208350Z","steps":["trace[2086298480] 'process raft request'  (duration: 109.500512ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:54.397317Z","caller":"traceutil/trace.go:171","msg":"trace[1847226588] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1239; }","duration":"248.552965ms","start":"2024-09-06T18:30:54.148751Z","end":"2024-09-06T18:30:54.397304Z","steps":["trace[1847226588] 'read index received'  (duration: 248.479939ms)","trace[1847226588] 'applied index is now lower than readState.Index'  (duration: 72.734µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:30:54.397399Z","caller":"traceutil/trace.go:171","msg":"trace[618388873] transaction","detail":"{read_only:false; response_revision:1207; number_of_response:1; }","duration":"263.711399ms","start":"2024-09-06T18:30:54.133684Z","end":"2024-09-06T18:30:54.397396Z","steps":["trace[618388873] 'process raft request'  (duration: 263.569597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:54.397476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.708098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:54.397486Z","caller":"traceutil/trace.go:171","msg":"trace[483686979] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1207; }","duration":"248.733467ms","start":"2024-09-06T18:30:54.148750Z","end":"2024-09-06T18:30:54.397483Z","steps":["trace[483686979] 'agreement among raft nodes before linearized reading'  (duration: 248.700058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:54.397526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.095867ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:54.397532Z","caller":"traceutil/trace.go:171","msg":"trace[363232837] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1207; }","duration":"195.10299ms","start":"2024-09-06T18:30:54.202427Z","end":"2024-09-06T18:30:54.397530Z","steps":["trace[363232837] 'agreement among raft nodes before linearized reading'  (duration: 195.092659ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:39:35.015841Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1845}
	{"level":"info","ts":"2024-09-06T18:39:35.123462Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1845,"took":"104.636055ms","hash":2837130340,"current-db-size-bytes":8912896,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4820992,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-06T18:39:35.123495Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2837130340,"revision":1845,"compact-revision":-1}
	
	
	==> gcp-auth [7b41fe3dca6e] <==
	2024/09/06 18:32:21 GCP Auth Webhook started!
	2024/09/06 18:32:37 Ready to marshal response ...
	2024/09/06 18:32:37 Ready to write response ...
	2024/09/06 18:32:38 Ready to marshal response ...
	2024/09/06 18:32:38 Ready to write response ...
	2024/09/06 18:33:01 Ready to marshal response ...
	2024/09/06 18:33:01 Ready to write response ...
	2024/09/06 18:33:01 Ready to marshal response ...
	2024/09/06 18:33:01 Ready to write response ...
	2024/09/06 18:33:02 Ready to marshal response ...
	2024/09/06 18:33:02 Ready to write response ...
	2024/09/06 18:41:13 Ready to marshal response ...
	2024/09/06 18:41:13 Ready to write response ...
	2024/09/06 18:41:14 Ready to marshal response ...
	2024/09/06 18:41:14 Ready to write response ...
	2024/09/06 18:41:30 Ready to marshal response ...
	2024/09/06 18:41:30 Ready to write response ...
	2024/09/06 18:42:01 Ready to marshal response ...
	2024/09/06 18:42:01 Ready to write response ...
	2024/09/06 18:42:11 Ready to marshal response ...
	2024/09/06 18:42:11 Ready to write response ...
	
	
	==> kernel <==
	 18:42:13 up 12 min,  0 users,  load average: 0.58, 0.66, 0.46
	Linux addons-439000 5.10.207 #1 SMP PREEMPT Tue Sep 3 18:23:52 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9f2e7ff60998] <==
	W0906 18:32:53.329682       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0906 18:32:53.360995       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0906 18:32:53.395892       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0906 18:32:53.411070       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0906 18:32:53.413506       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0906 18:32:53.467860       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0906 18:41:23.251324       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 18:41:45.747056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:41:45.747073       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:41:45.760610       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:41:45.760633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:41:45.763671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:41:45.763683       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:41:45.771040       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:41:45.771056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:41:45.788994       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:41:45.789012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 18:41:46.764869       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 18:41:46.789666       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0906 18:41:46.880024       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0906 18:41:56.521105       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0906 18:41:57.532757       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0906 18:42:01.869558       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0906 18:42:01.970124       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.124.224"}
	I0906 18:42:11.244665       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.176.93"}
	
	
	==> kube-controller-manager [621c14c5591e] <==
	E0906 18:42:04.785140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:05.575728       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:05.575868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:06.614824       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0906 18:42:07.305862       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:07.305993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:11.186724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.413402ms"
	I0906 18:42:11.195440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.60796ms"
	I0906 18:42:11.195541       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.051µs"
	I0906 18:42:11.197164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.975µs"
	W0906 18:42:11.618982       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:11.619006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:12.033122       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0906 18:42:12.034247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="1.793µs"
	I0906 18:42:12.035958       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0906 18:42:12.130167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:12.130192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:12.849309       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0906 18:42:12.849331       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 18:42:13.007919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-439000"
	I0906 18:42:13.200808       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0906 18:42:13.200829       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 18:42:13.383307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="2.835µs"
	I0906 18:42:13.872232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.645476ms"
	I0906 18:42:13.872259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.509µs"
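The repeating `failed to list *v1.PartialObjectMetadata: the server could not find the requested resource` errors line up with the CRD teardown visible in the kube-apiserver log above (watchers for snapshot.storage.k8s.io and gadget.kinvolk.io resources are terminated between 18:41:46 and 18:41:57): the metadata informers backing garbage collection and quota keep retrying watches on resource types that no longer exist until discovery refreshes. A hedged way to confirm the CRDs are gone, not captured in this run:

	kubectl --context addons-439000 get crd | grep -E 'snapshot.storage|gadget.kinvolk.io'
	# expected: no matches once the corresponding addons are disabled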
	
	
	==> kube-proxy [ec7b31e89499] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:29:44.162335       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:29:44.171165       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0906 18:29:44.171198       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:29:44.189506       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:29:44.189530       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:29:44.189544       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:29:44.190387       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:29:44.190482       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:29:44.190488       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:29:44.191210       1 config.go:197] "Starting service config controller"
	I0906 18:29:44.191229       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:29:44.191242       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:29:44.191244       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:29:44.191802       1 config.go:326] "Starting node config controller"
	I0906 18:29:44.191805       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:29:44.295683       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:29:44.295717       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:29:44.295729       1 shared_informer.go:320] Caches are synced for endpoint slice config
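The truncated nftables errors at the top are benign here: on startup kube-proxy tries to delete any stale `kube-proxy` nftables tables even when it will run in iptables mode, and this Buildroot guest kernel simply lacks nftables support (the `No iptables support for family` line for IPv6 is the same limitation on the ip6tables side). Proxying then proceeds normally with the IPv4 iptables backend, as the cache-sync lines confirm. A hedged spot check from the host, with both commands expected to fail (or be absent) on this kernel:

	out/minikube-darwin-arm64 ssh -p addons-439000 -- sudo nft list tables
	out/minikube-darwin-arm64 ssh -p addons-439000 -- sudo ip6tables -L -n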
	
	
	==> kube-scheduler [72689c42ca88] <==
	W0906 18:29:35.570241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 18:29:35.570328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:35.570383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:35.570451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:35.570412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 18:29:35.570482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:35.570429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 18:29:35.570598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:29:35.570608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0906 18:29:35.570668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:35.570735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:29:35.570745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:35.570810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:29:35.570852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:35.570909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:35.570918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:35.570952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:29:35.570960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:36.420149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:29:36.420259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:36.431653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:29:36.431869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:36.618491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 18:29:36.618596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0906 18:29:37.178710       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:42:11 addons-439000 kubelet[2040]: I0906 18:42:11.838120    2040 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e1e95163a4f04a2181abeafca53263990e9ea9b3b98ea30cbf8672b9d2ca3bb6"} err="failed to get container status \"e1e95163a4f04a2181abeafca53263990e9ea9b3b98ea30cbf8672b9d2ca3bb6\": rpc error: code = Unknown desc = Error response from daemon: No such container: e1e95163a4f04a2181abeafca53263990e9ea9b3b98ea30cbf8672b9d2ca3bb6"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.439520    2040 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7zbm\" (UniqueName: \"kubernetes.io/projected/8e2859de-dc98-408b-9fa3-d499f1467acc-kube-api-access-l7zbm\") pod \"8e2859de-dc98-408b-9fa3-d499f1467acc\" (UID: \"8e2859de-dc98-408b-9fa3-d499f1467acc\") "
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.439731    2040 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8e2859de-dc98-408b-9fa3-d499f1467acc-gcp-creds\") pod \"8e2859de-dc98-408b-9fa3-d499f1467acc\" (UID: \"8e2859de-dc98-408b-9fa3-d499f1467acc\") "
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.439762    2040 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e2859de-dc98-408b-9fa3-d499f1467acc-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8e2859de-dc98-408b-9fa3-d499f1467acc" (UID: "8e2859de-dc98-408b-9fa3-d499f1467acc"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.443473    2040 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2859de-dc98-408b-9fa3-d499f1467acc-kube-api-access-l7zbm" (OuterVolumeSpecName: "kube-api-access-l7zbm") pod "8e2859de-dc98-408b-9fa3-d499f1467acc" (UID: "8e2859de-dc98-408b-9fa3-d499f1467acc"). InnerVolumeSpecName "kube-api-access-l7zbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.540173    2040 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l7zbm\" (UniqueName: \"kubernetes.io/projected/8e2859de-dc98-408b-9fa3-d499f1467acc-kube-api-access-l7zbm\") on node \"addons-439000\" DevicePath \"\""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.540193    2040 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8e2859de-dc98-408b-9fa3-d499f1467acc-gcp-creds\") on node \"addons-439000\" DevicePath \"\""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.640745    2040 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b82x5\" (UniqueName: \"kubernetes.io/projected/cbfa4ae6-52b4-4753-8931-dc75977f2b98-kube-api-access-b82x5\") pod \"cbfa4ae6-52b4-4753-8931-dc75977f2b98\" (UID: \"cbfa4ae6-52b4-4753-8931-dc75977f2b98\") "
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.641573    2040 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbfa4ae6-52b4-4753-8931-dc75977f2b98-kube-api-access-b82x5" (OuterVolumeSpecName: "kube-api-access-b82x5") pod "cbfa4ae6-52b4-4753-8931-dc75977f2b98" (UID: "cbfa4ae6-52b4-4753-8931-dc75977f2b98"). InnerVolumeSpecName "kube-api-access-b82x5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.741531    2040 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slvc9\" (UniqueName: \"kubernetes.io/projected/6010aca8-2072-44fb-abeb-395ddabbb03a-kube-api-access-slvc9\") pod \"6010aca8-2072-44fb-abeb-395ddabbb03a\" (UID: \"6010aca8-2072-44fb-abeb-395ddabbb03a\") "
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.741560    2040 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b82x5\" (UniqueName: \"kubernetes.io/projected/cbfa4ae6-52b4-4753-8931-dc75977f2b98-kube-api-access-b82x5\") on node \"addons-439000\" DevicePath \"\""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.742069    2040 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6010aca8-2072-44fb-abeb-395ddabbb03a-kube-api-access-slvc9" (OuterVolumeSpecName: "kube-api-access-slvc9") pod "6010aca8-2072-44fb-abeb-395ddabbb03a" (UID: "6010aca8-2072-44fb-abeb-395ddabbb03a"). InnerVolumeSpecName "kube-api-access-slvc9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.776623    2040 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33539311-1fe9-4e25-a9f6-99843707bfe8" path="/var/lib/kubelet/pods/33539311-1fe9-4e25-a9f6-99843707bfe8/volumes"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.776775    2040 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3547e945-d2b0-4c57-a697-e190f8a4c3d1" path="/var/lib/kubelet/pods/3547e945-d2b0-4c57-a697-e190f8a4c3d1/volumes"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.777501    2040 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65a8678-86f6-44c3-8b6a-dc45366b7642" path="/var/lib/kubelet/pods/a65a8678-86f6-44c3-8b6a-dc45366b7642/volumes"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.841635    2040 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-slvc9\" (UniqueName: \"kubernetes.io/projected/6010aca8-2072-44fb-abeb-395ddabbb03a-kube-api-access-slvc9\") on node \"addons-439000\" DevicePath \"\""
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.843983    2040 scope.go:117] "RemoveContainer" containerID="9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.868524    2040 scope.go:117] "RemoveContainer" containerID="9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: E0906 18:42:13.869015    2040 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76" containerID="9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.869054    2040 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76"} err="failed to get container status \"9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76\": rpc error: code = Unknown desc = Error response from daemon: No such container: 9b889522e6504bd2e6f9e970de0db6f23b903b906f8f6f95746d8504bdb93e76"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.869068    2040 scope.go:117] "RemoveContainer" containerID="a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.876889    2040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-8s4x8" podStartSLOduration=1.210784846 podStartE2EDuration="2.876876719s" podCreationTimestamp="2024-09-06 18:42:11 +0000 UTC" firstStartedPulling="2024-09-06 18:42:11.598588729 +0000 UTC m=+753.877167721" lastFinishedPulling="2024-09-06 18:42:13.264680602 +0000 UTC m=+755.543259594" observedRunningTime="2024-09-06 18:42:13.867732239 +0000 UTC m=+756.146311231" watchObservedRunningTime="2024-09-06 18:42:13.876876719 +0000 UTC m=+756.155455711"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.879531    2040 scope.go:117] "RemoveContainer" containerID="a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: E0906 18:42:13.879951    2040 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c" containerID="a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c"
	Sep 06 18:42:13 addons-439000 kubelet[2040]: I0906 18:42:13.879986    2040 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c"} err="failed to get container status \"a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c\": rpc error: code = Unknown desc = Error response from daemon: No such container: a3d1bd8fd9281567525131da496ec09b361efd5253c09c0e892ed69edf28e12c"
	
	
	==> storage-provisioner [af4215b2a1ce] <==
	I0906 18:29:46.388817       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:29:46.405741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:29:46.405760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:29:46.409281       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:29:46.409351       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-439000_b257f6f8-8585-47fb-acf2-cece86752500!
	I0906 18:29:46.409700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e7fbe98-990a-49d8-b1f7-6bb2961f6ab6", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-439000_b257f6f8-8585-47fb-acf2-cece86752500 became leader
	I0906 18:29:46.509451       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-439000_b257f6f8-8585-47fb-acf2-cece86752500!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-439000 -n addons-439000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-439000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-439000 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-439000 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-439000/192.168.105.2
	Start Time:       Fri, 06 Sep 2024 11:33:01 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qmr72 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qmr72:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-439000
	  Normal   Pulling    7m52s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m52s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m52s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m25s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m2s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
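What actually keeps busybox out of Running is visible in its events: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with `unauthorized: authentication failed`, so the pod never leaves ImagePullBackOff. One plausible cause, given the fake GCP credentials the gcp-auth webhook injects (PROJECT_ID=this_is_fake above), is that the Docker daemon is presenting those invalid credentials to gcr.io; a hedged way to reproduce the pull outside kubelet and separate a credential problem from a network one:

	out/minikube-darwin-arm64 ssh -p addons-439000 -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc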
--- FAIL: TestAddons/parallel/Registry (71.30s)

TestCertOptions (10.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-054000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-054000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.976277958s)

-- stdout --
	* [cert-options-054000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-054000" primary control-plane node in "cert-options-054000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-054000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-054000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-054000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
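Exit status 80 here has the same root cause seen throughout this run: the qemu2 driver cannot reach the socket_vmnet unix socket, so the VM never gets a network and provisioning aborts with GUEST_PROVISION; every later exit-83 `state=Stopped` failure in this test is downstream of that one refusal. A hedged first check on the host, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver documentation describes:

	# is anything serving the socket the driver dials?
	ls -l /var/run/socket_vmnet
	# restart the daemon if it is installed but not running
	sudo brew services restart socket_vmnet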
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-054000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-054000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.436791ms)

-- stdout --
	* The control-plane node cert-options-054000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-054000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-054000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-054000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-054000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-054000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.213917ms)

-- stdout --
	* The control-plane node cert-options-054000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-054000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-054000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-054000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-054000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-06 12:34:23.040267 -0700 PDT m=+3947.663358917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-054000 -n cert-options-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-054000 -n cert-options-054000: exit status 7 (29.798292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-054000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-054000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-054000
--- FAIL: TestCertOptions (10.24s)
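Every assertion above failed for the same upstream reason: the VM never started, so the profile stayed Stopped. Once the host VM boots, the SAN check can be reproduced by hand; a minimal sketch, assuming the profile is started with the extra SANs this test asserts on (--apiserver-ips and --apiserver-names are real minikube start flags, but the exact values here are inferred from the assertions at cert_options_test.go:69):

	out/minikube-darwin-arm64 start -p cert-options-054000 --driver=qemu2 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com
	# inspect the SAN extension of the generated apiserver certificate
	out/minikube-darwin-arm64 -p cert-options-054000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'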

TestCertExpiration (197.29s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-051000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-051000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.888618833s)

-- stdout --
	* [cert-expiration-051000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-051000" primary control-plane node in "cert-expiration-051000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-051000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-051000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-051000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-051000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.241638542s)

-- stdout --
	* [cert-expiration-051000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-051000" primary control-plane node in "cert-expiration-051000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-051000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-051000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-051000" primary control-plane node in "cert-expiration-051000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-06 12:37:15.570285 -0700 PDT m=+4120.194621959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-051000 -n cert-expiration-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-051000 -n cert-expiration-051000: exit status 7 (68.052375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-051000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-051000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-051000
--- FAIL: TestCertExpiration (197.29s)
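Both cert tests fail before any certificate logic runs: every qemu2 start in this report dies on Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon is not serving on the CI host. A triage sketch, assuming socket_vmnet was installed via Homebrew as the minikube docs describe (the service name and socket path are assumptions about this agent's setup):

	# is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# restart the daemon; it needs root to open vmnet interfaces
	sudo brew services restart socket_vmnet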

TestDockerFlags (12.20s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-116000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-116000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.963083s)

-- stdout --
	* [docker-flags-116000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-116000" primary control-plane node in "docker-flags-116000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-116000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:34:00.740660    7147 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:34:00.740808    7147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:00.740814    7147 out.go:358] Setting ErrFile to fd 2...
	I0906 12:34:00.740817    7147 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:00.740989    7147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:34:00.742380    7147 out.go:352] Setting JSON to false
	I0906 12:34:00.760300    7147 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5610,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:34:00.760373    7147 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:34:00.767843    7147 out.go:177] * [docker-flags-116000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:34:00.775019    7147 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:34:00.775048    7147 notify.go:220] Checking for updates...
	I0906 12:34:00.782941    7147 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:34:00.785988    7147 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:34:00.788900    7147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:34:00.791980    7147 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:34:00.794985    7147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:34:00.796432    7147 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:00.796501    7147 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:00.796548    7147 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:34:00.801014    7147 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:34:00.807842    7147 start.go:297] selected driver: qemu2
	I0906 12:34:00.807850    7147 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:34:00.807856    7147 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:34:00.810110    7147 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:34:00.812932    7147 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:34:00.816035    7147 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0906 12:34:00.816068    7147 cni.go:84] Creating CNI manager for ""
	I0906 12:34:00.816075    7147 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:34:00.816079    7147 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:34:00.816112    7147 start.go:340] cluster config:
	{Name:docker-flags-116000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:34:00.819490    7147 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:34:00.822981    7147 out.go:177] * Starting "docker-flags-116000" primary control-plane node in "docker-flags-116000" cluster
	I0906 12:34:00.830981    7147 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:34:00.830993    7147 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:34:00.831001    7147 cache.go:56] Caching tarball of preloaded images
	I0906 12:34:00.831052    7147 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:34:00.831057    7147 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:34:00.831112    7147 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/docker-flags-116000/config.json ...
	I0906 12:34:00.831123    7147 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/docker-flags-116000/config.json: {Name:mk44a46355768cff564fb9158491db46c1d2c16c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:34:00.831389    7147 start.go:360] acquireMachinesLock for docker-flags-116000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:02.867598    7147 start.go:364] duration metric: took 2.036166416s to acquireMachinesLock for "docker-flags-116000"
	I0906 12:34:02.867746    7147 start.go:93] Provisioning new machine with config: &{Name:docker-flags-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:02.867998    7147 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:02.872552    7147 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:34:02.923769    7147 start.go:159] libmachine.API.Create for "docker-flags-116000" (driver="qemu2")
	I0906 12:34:02.923816    7147 client.go:168] LocalClient.Create starting
	I0906 12:34:02.923953    7147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:02.924014    7147 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:02.924032    7147 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:02.924108    7147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:02.924162    7147 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:02.924178    7147 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:02.924990    7147 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:03.100180    7147 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:03.203320    7147 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:03.203326    7147 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:03.203508    7147 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2
	I0906 12:34:03.212913    7147 main.go:141] libmachine: STDOUT: 
	I0906 12:34:03.212937    7147 main.go:141] libmachine: STDERR: 
	I0906 12:34:03.212993    7147 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2 +20000M
	I0906 12:34:03.220833    7147 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:03.220849    7147 main.go:141] libmachine: STDERR: 
	I0906 12:34:03.220868    7147 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2
	I0906 12:34:03.220873    7147 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:03.220886    7147 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:03.220914    7147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:67:00:72:c9:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2
	I0906 12:34:03.222567    7147 main.go:141] libmachine: STDOUT: 
	I0906 12:34:03.222585    7147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:03.222610    7147 client.go:171] duration metric: took 298.788917ms to LocalClient.Create
	I0906 12:34:05.224799    7147 start.go:128] duration metric: took 2.356785625s to createHost
	I0906 12:34:05.224852    7147 start.go:83] releasing machines lock for "docker-flags-116000", held for 2.357200167s
	W0906 12:34:05.224896    7147 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:05.245865    7147 out.go:177] * Deleting "docker-flags-116000" in qemu2 ...
	W0906 12:34:05.277138    7147 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:05.277158    7147 start.go:729] Will try again in 5 seconds ...
	I0906 12:34:10.277488    7147 start.go:360] acquireMachinesLock for docker-flags-116000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:10.286133    7147 start.go:364] duration metric: took 8.52775ms to acquireMachinesLock for "docker-flags-116000"
	I0906 12:34:10.286309    7147 start.go:93] Provisioning new machine with config: &{Name:docker-flags-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:10.286576    7147 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:10.295878    7147 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:34:10.347104    7147 start.go:159] libmachine.API.Create for "docker-flags-116000" (driver="qemu2")
	I0906 12:34:10.347184    7147 client.go:168] LocalClient.Create starting
	I0906 12:34:10.347334    7147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:10.347386    7147 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:10.347403    7147 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:10.347496    7147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:10.347526    7147 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:10.347538    7147 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:10.348085    7147 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:10.520182    7147 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:10.611629    7147 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:10.611635    7147 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:10.611811    7147 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2
	I0906 12:34:10.620929    7147 main.go:141] libmachine: STDOUT: 
	I0906 12:34:10.620949    7147 main.go:141] libmachine: STDERR: 
	I0906 12:34:10.620995    7147 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2 +20000M
	I0906 12:34:10.628949    7147 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:10.628970    7147 main.go:141] libmachine: STDERR: 
	I0906 12:34:10.628980    7147 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2
	I0906 12:34:10.628986    7147 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:10.628995    7147 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:10.629022    7147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:46:fa:46:4a:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/docker-flags-116000/disk.qcow2
	I0906 12:34:10.630671    7147 main.go:141] libmachine: STDOUT: 
	I0906 12:34:10.630688    7147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:10.630700    7147 client.go:171] duration metric: took 283.50225ms to LocalClient.Create
	I0906 12:34:12.632954    7147 start.go:128] duration metric: took 2.346335084s to createHost
	I0906 12:34:12.633059    7147 start.go:83] releasing machines lock for "docker-flags-116000", held for 2.346902667s
	W0906 12:34:12.633469    7147 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-116000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-116000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:12.644907    7147 out.go:201] 
	W0906 12:34:12.649009    7147 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:34:12.649053    7147 out.go:270] * 
	* 
	W0906 12:34:12.651791    7147 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:34:12.658950    7147 out.go:201] 

** /stderr **
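The trace above also shows how minikube attaches QEMU to the shared network: qemu-system-aarch64 is launched through socket_vmnet_client, which connects to the daemon's unix socket and hands the connection to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3). The "Connection refused" therefore happens in the wrapper, before QEMU itself ever runs. A stripped-down form of the same invocation, with the disk path as a placeholder:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt -cpu host -accel hvf -m 2048 -smp 2 \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  -display none -daemonize disk.qcow2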
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-116000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-116000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-116000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (82.139292ms)

-- stdout --
	* The control-plane node docker-flags-116000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-116000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-116000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-116000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-116000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-116000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-116000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-116000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-116000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.324125ms)

-- stdout --
	* The control-plane node docker-flags-116000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-116000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-116000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-116000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-116000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-116000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-09-06 12:34:12.802928 -0700 PDT m=+3937.425946251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-116000 -n docker-flags-116000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-116000 -n docker-flags-116000: exit status 7 (30.197584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-116000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-116000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-116000
--- FAIL: TestDockerFlags (12.20s)
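As with the cert tests, none of the flag assertions ran against a live Docker daemon. For reference, on a working cluster the two systemctl probes would surface the start flags roughly as follows (a sketch of the expected shape, not captured output):

	out/minikube-darwin-arm64 -p docker-flags-116000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"
	# Environment=FOO=BAR BAZ=BAT ...
	out/minikube-darwin-arm64 -p docker-flags-116000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"
	# ExecStart=... dockerd ... --debug --icc=true ...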

TestForceSystemdFlag (10.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-941000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-941000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.97209325s)

-- stdout --
	* [force-systemd-flag-941000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-941000" primary control-plane node in "force-systemd-flag-941000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-941000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:33:33.690040    7027 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:33:33.690404    7027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:33:33.690423    7027 out.go:358] Setting ErrFile to fd 2...
	I0906 12:33:33.690426    7027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:33:33.690612    7027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:33:33.691941    7027 out.go:352] Setting JSON to false
	I0906 12:33:33.708600    7027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5583,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:33:33.708668    7027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:33:33.712897    7027 out.go:177] * [force-systemd-flag-941000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:33:33.720054    7027 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:33:33.720101    7027 notify.go:220] Checking for updates...
	I0906 12:33:33.726870    7027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:33:33.729902    7027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:33:33.732866    7027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:33:33.735850    7027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:33:33.738870    7027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:33:33.742181    7027 config.go:182] Loaded profile config "NoKubernetes-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0906 12:33:33.742260    7027 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:33:33.742309    7027 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:33:33.746849    7027 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:33:33.753849    7027 start.go:297] selected driver: qemu2
	I0906 12:33:33.753858    7027 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:33:33.753865    7027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:33:33.756253    7027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:33:33.759873    7027 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:33:33.763046    7027 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:33:33.763087    7027 cni.go:84] Creating CNI manager for ""
	I0906 12:33:33.763100    7027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:33:33.763104    7027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:33:33.763132    7027 start.go:340] cluster config:
	{Name:force-systemd-flag-941000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:33:33.766856    7027 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:33:33.774869    7027 out.go:177] * Starting "force-systemd-flag-941000" primary control-plane node in "force-systemd-flag-941000" cluster
	I0906 12:33:33.778680    7027 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:33:33.778699    7027 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:33:33.778709    7027 cache.go:56] Caching tarball of preloaded images
	I0906 12:33:33.778780    7027 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:33:33.778786    7027 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:33:33.778859    7027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/force-systemd-flag-941000/config.json ...
	I0906 12:33:33.778878    7027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/force-systemd-flag-941000/config.json: {Name:mkb47f48c6014612926a5c57ac9a357d97dda746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:33:33.779116    7027 start.go:360] acquireMachinesLock for force-systemd-flag-941000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:33:33.779155    7027 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "force-systemd-flag-941000"
	I0906 12:33:33.779176    7027 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:33:33.779238    7027 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:33:33.787699    7027 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:33:33.806113    7027 start.go:159] libmachine.API.Create for "force-systemd-flag-941000" (driver="qemu2")
	I0906 12:33:33.806146    7027 client.go:168] LocalClient.Create starting
	I0906 12:33:33.806213    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:33:33.806246    7027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:33.806271    7027 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:33.806308    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:33:33.806333    7027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:33.806342    7027 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:33.806725    7027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:33:34.061982    7027 main.go:141] libmachine: Creating SSH key...
	I0906 12:33:34.141395    7027 main.go:141] libmachine: Creating Disk image...
	I0906 12:33:34.141402    7027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:33:34.141601    7027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0906 12:33:34.151018    7027 main.go:141] libmachine: STDOUT: 
	I0906 12:33:34.151036    7027 main.go:141] libmachine: STDERR: 
	I0906 12:33:34.151083    7027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2 +20000M
	I0906 12:33:34.158928    7027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:33:34.158943    7027 main.go:141] libmachine: STDERR: 
	I0906 12:33:34.158966    7027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0906 12:33:34.158976    7027 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:33:34.158989    7027 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:33:34.159018    7027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:77:29:6a:70:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0906 12:33:34.160601    7027 main.go:141] libmachine: STDOUT: 
	I0906 12:33:34.160618    7027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:33:34.160637    7027 client.go:171] duration metric: took 354.488042ms to LocalClient.Create
	I0906 12:33:36.162971    7027 start.go:128] duration metric: took 2.38372975s to createHost
	I0906 12:33:36.163030    7027 start.go:83] releasing machines lock for "force-systemd-flag-941000", held for 2.383877625s
	W0906 12:33:36.163079    7027 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:33:36.175271    7027 out.go:177] * Deleting "force-systemd-flag-941000" in qemu2 ...
	W0906 12:33:36.209378    7027 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:33:36.209412    7027 start.go:729] Will try again in 5 seconds ...
	I0906 12:33:41.211525    7027 start.go:360] acquireMachinesLock for force-systemd-flag-941000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:33:41.211987    7027 start.go:364] duration metric: took 355.75µs to acquireMachinesLock for "force-systemd-flag-941000"
	I0906 12:33:41.212129    7027 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:33:41.212382    7027 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:33:41.216913    7027 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:33:41.266845    7027 start.go:159] libmachine.API.Create for "force-systemd-flag-941000" (driver="qemu2")
	I0906 12:33:41.266888    7027 client.go:168] LocalClient.Create starting
	I0906 12:33:41.267014    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:33:41.267074    7027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:41.267090    7027 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:41.267160    7027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:33:41.267203    7027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:41.267213    7027 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:41.267735    7027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:33:41.441307    7027 main.go:141] libmachine: Creating SSH key...
	I0906 12:33:41.571656    7027 main.go:141] libmachine: Creating Disk image...
	I0906 12:33:41.571663    7027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:33:41.571908    7027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0906 12:33:41.581721    7027 main.go:141] libmachine: STDOUT: 
	I0906 12:33:41.581739    7027 main.go:141] libmachine: STDERR: 
	I0906 12:33:41.581784    7027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2 +20000M
	I0906 12:33:41.589670    7027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:33:41.589685    7027 main.go:141] libmachine: STDERR: 
	I0906 12:33:41.589696    7027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0906 12:33:41.589702    7027 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:33:41.589711    7027 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:33:41.589745    7027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:6c:5b:ce:bf:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-flag-941000/disk.qcow2
	I0906 12:33:41.591383    7027 main.go:141] libmachine: STDOUT: 
	I0906 12:33:41.591400    7027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:33:41.591412    7027 client.go:171] duration metric: took 324.521584ms to LocalClient.Create
	I0906 12:33:43.593584    7027 start.go:128] duration metric: took 2.381170792s to createHost
	I0906 12:33:43.593786    7027 start.go:83] releasing machines lock for "force-systemd-flag-941000", held for 2.38164575s
	W0906 12:33:43.594158    7027 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:33:43.602925    7027 out.go:201] 
	W0906 12:33:43.606968    7027 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:33:43.606992    7027 out.go:270] * 
	W0906 12:33:43.609591    7027 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:33:43.618896    7027 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-941000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-941000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-941000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.514291ms)

-- stdout --
	* The control-plane node force-systemd-flag-941000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-941000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-941000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-06 12:33:43.711053 -0700 PDT m=+3908.333860834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-941000 -n force-systemd-flag-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-941000 -n force-systemd-flag-941000: exit status 7 (34.189917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-941000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-941000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-941000
--- FAIL: TestForceSystemdFlag (10.16s)
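Both creation attempts in this test die at the same step: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and minikube exits with GUEST_PROVISION. A minimal diagnostic sketch for the build host, using the paths shown in the log above (the manual start line follows the socket_vmnet README, and the gateway address is inferred from the 192.168.105.x guest IPs elsewhere in this report, so both may differ per setup):

    # Is the socket_vmnet daemon running, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If not, a manual start per the socket_vmnet README looks roughly like:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet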

TestForceSystemdEnv (10.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-067000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-067000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.889928375s)

-- stdout --
	* [force-systemd-env-067000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-067000" primary control-plane node in "force-systemd-env-067000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-067000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:33:50.637531    7095 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:33:50.637649    7095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:33:50.637652    7095 out.go:358] Setting ErrFile to fd 2...
	I0906 12:33:50.637654    7095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:33:50.637794    7095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:33:50.638925    7095 out.go:352] Setting JSON to false
	I0906 12:33:50.655107    7095 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5600,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:33:50.655187    7095 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:33:50.662464    7095 out.go:177] * [force-systemd-env-067000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:33:50.672275    7095 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:33:50.672305    7095 notify.go:220] Checking for updates...
	I0906 12:33:50.677155    7095 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:33:50.680256    7095 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:33:50.683246    7095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:33:50.686177    7095 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:33:50.689237    7095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0906 12:33:50.692650    7095 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:33:50.692694    7095 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:33:50.697200    7095 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:33:50.704256    7095 start.go:297] selected driver: qemu2
	I0906 12:33:50.704265    7095 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:33:50.704273    7095 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:33:50.706555    7095 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:33:50.709211    7095 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:33:50.712337    7095 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:33:50.712357    7095 cni.go:84] Creating CNI manager for ""
	I0906 12:33:50.712366    7095 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:33:50.712378    7095 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:33:50.712415    7095 start.go:340] cluster config:
	{Name:force-systemd-env-067000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:33:50.716025    7095 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:33:50.722229    7095 out.go:177] * Starting "force-systemd-env-067000" primary control-plane node in "force-systemd-env-067000" cluster
	I0906 12:33:50.726235    7095 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:33:50.726249    7095 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:33:50.726259    7095 cache.go:56] Caching tarball of preloaded images
	I0906 12:33:50.726312    7095 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:33:50.726317    7095 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:33:50.726371    7095 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/force-systemd-env-067000/config.json ...
	I0906 12:33:50.726383    7095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/force-systemd-env-067000/config.json: {Name:mka9e1104869e310085983c74bc44be6cc78f6f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:33:50.726754    7095 start.go:360] acquireMachinesLock for force-systemd-env-067000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:33:50.726791    7095 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "force-systemd-env-067000"
	I0906 12:33:50.726804    7095 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:33:50.726834    7095 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:33:50.731161    7095 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:33:50.748404    7095 start.go:159] libmachine.API.Create for "force-systemd-env-067000" (driver="qemu2")
	I0906 12:33:50.748428    7095 client.go:168] LocalClient.Create starting
	I0906 12:33:50.748481    7095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:33:50.748511    7095 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:50.748519    7095 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:50.748559    7095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:33:50.748582    7095 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:50.748590    7095 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:50.748926    7095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:33:50.907905    7095 main.go:141] libmachine: Creating SSH key...
	I0906 12:33:50.997552    7095 main.go:141] libmachine: Creating Disk image...
	I0906 12:33:50.997557    7095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:33:50.997780    7095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2
	I0906 12:33:51.007310    7095 main.go:141] libmachine: STDOUT: 
	I0906 12:33:51.007333    7095 main.go:141] libmachine: STDERR: 
	I0906 12:33:51.007391    7095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2 +20000M
	I0906 12:33:51.015263    7095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:33:51.015285    7095 main.go:141] libmachine: STDERR: 
	I0906 12:33:51.015300    7095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2
	I0906 12:33:51.015304    7095 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:33:51.015317    7095 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:33:51.015352    7095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:7c:f4:b2:ab:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2
	I0906 12:33:51.016933    7095 main.go:141] libmachine: STDOUT: 
	I0906 12:33:51.016949    7095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:33:51.016973    7095 client.go:171] duration metric: took 268.544375ms to LocalClient.Create
	I0906 12:33:53.019155    7095 start.go:128] duration metric: took 2.292316458s to createHost
	I0906 12:33:53.019288    7095 start.go:83] releasing machines lock for "force-systemd-env-067000", held for 2.292455084s
	W0906 12:33:53.019371    7095 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:33:53.027525    7095 out.go:177] * Deleting "force-systemd-env-067000" in qemu2 ...
	W0906 12:33:53.064543    7095 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:33:53.064577    7095 start.go:729] Will try again in 5 seconds ...
	I0906 12:33:58.065217    7095 start.go:360] acquireMachinesLock for force-systemd-env-067000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:33:58.065674    7095 start.go:364] duration metric: took 384.166µs to acquireMachinesLock for "force-systemd-env-067000"
	I0906 12:33:58.065809    7095 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:33:58.066072    7095 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:33:58.076846    7095 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:33:58.126364    7095 start.go:159] libmachine.API.Create for "force-systemd-env-067000" (driver="qemu2")
	I0906 12:33:58.126415    7095 client.go:168] LocalClient.Create starting
	I0906 12:33:58.126520    7095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:33:58.126576    7095 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:58.126592    7095 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:58.126654    7095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:33:58.126696    7095 main.go:141] libmachine: Decoding PEM data...
	I0906 12:33:58.126708    7095 main.go:141] libmachine: Parsing certificate...
	I0906 12:33:58.127147    7095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:33:58.299326    7095 main.go:141] libmachine: Creating SSH key...
	I0906 12:33:58.436373    7095 main.go:141] libmachine: Creating Disk image...
	I0906 12:33:58.436382    7095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:33:58.436586    7095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2
	I0906 12:33:58.446845    7095 main.go:141] libmachine: STDOUT: 
	I0906 12:33:58.446882    7095 main.go:141] libmachine: STDERR: 
	I0906 12:33:58.446952    7095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2 +20000M
	I0906 12:33:58.455913    7095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:33:58.455935    7095 main.go:141] libmachine: STDERR: 
	I0906 12:33:58.455956    7095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2
	I0906 12:33:58.455960    7095 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:33:58.455974    7095 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:33:58.455999    7095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:a6:cf:41:51:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/force-systemd-env-067000/disk.qcow2
	I0906 12:33:58.458821    7095 main.go:141] libmachine: STDOUT: 
	I0906 12:33:58.458847    7095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:33:58.458867    7095 client.go:171] duration metric: took 332.4495ms to LocalClient.Create
	I0906 12:34:00.461081    7095 start.go:128] duration metric: took 2.394989584s to createHost
	I0906 12:34:00.461163    7095 start.go:83] releasing machines lock for "force-systemd-env-067000", held for 2.39548125s
	W0906 12:34:00.461531    7095 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:00.474997    7095 out.go:201] 
	W0906 12:34:00.478075    7095 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:34:00.478118    7095 out.go:270] * 
	W0906 12:34:00.481231    7095 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:34:00.489925    7095 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-067000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-067000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-067000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (72.688959ms)

-- stdout --
	* The control-plane node force-systemd-env-067000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-067000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-067000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-06 12:34:00.574271 -0700 PDT m=+3925.197201084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-067000 -n force-systemd-env-067000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-067000 -n force-systemd-env-067000: exit status 7 (35.879417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-067000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-067000
--- FAIL: TestForceSystemdEnv (10.10s)

TestFunctional/parallel/ServiceCmdConnect (30.34s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-152000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-152000 expose deployment hello-node-connect --type=NodePort --port=8080
E0906 11:47:22.605397    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-lpwm6" [464db546-6f59-437a-a87a-f29fda80c538] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0906 11:47:22.927790    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:23.571488    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-lpwm6" [464db546-6f59-437a-a87a-f29fda80c538] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0906 11:47:24.855174    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.073566958s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31792
functional_test.go:1661: error fetching http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
E0906 11:47:32.541852    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31792: Get "http://192.168.105.4:31792": dial tcp 192.168.105.4:31792: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-152000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-lpwm6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-152000/192.168.105.4
Start Time:       Fri, 06 Sep 2024 11:47:22 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://9a5b516e22a4724ff292bde504d3aaa9232695b0b953f683a9c223906813535c
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 06 Sep 2024 11:47:40 -0700
      Finished:     Fri, 06 Sep 2024 11:47:40 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfxdc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-zfxdc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  29s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-lpwm6 to functional-152000
  Normal   Pulled     11s (x3 over 28s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    11s (x3 over 28s)  kubelet            Created container echoserver-arm
  Normal   Started    11s (x3 over 28s)  kubelet            Started container echoserver-arm
  Warning  BackOff    0s (x4 over 26s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-lpwm6_default(464db546-6f59-437a-a87a-f29fda80c538)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-152000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
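The single line of output from the crashing container, "exec /usr/sbin/nginx: exec format error", points to an architecture mismatch: the entrypoint binary inside registry.k8s.io/echoserver-arm:1.8 is not executable on this arm64 node. A sketch of how to confirm what the image actually carries, assuming Docker CLI access to the node's daemon (for example via out/minikube-darwin-arm64 -p functional-152000 ssh):

    # Architecture recorded in the local image config
    docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8
    # Platforms published for the tag, queried from the registry (needs a reasonably recent Docker CLI)
    docker manifest inspect registry.k8s.io/echoserver-arm:1.8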
functional_test.go:1614: (dbg) Run:  kubectl --context functional-152000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.23.122
IPs:                      10.109.23.122
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31792/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
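The empty Endpoints: field above explains the connection-refused errors earlier in the test: the pod never becomes Ready, so the Service has no backends and connections to NodePort 31792 are rejected at the node. The corresponding checks, as a sketch:

    # An empty ENDPOINTS column confirms nothing is serving behind the NodePort
    kubectl --context functional-152000 get endpoints hello-node-connect
    # READY 0/1 with a climbing RESTARTS count matches the CrashLoopBackOff above
    kubectl --context functional-152000 get pods -l app=hello-node-connect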
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-152000 -n functional-152000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                      |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-152000 image ls                                                                                     | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	| image   | functional-152000 image save                                                                                   | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | kicbase/echo-server:functional-152000                                                                          |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| image   | functional-152000 image rm                                                                                     | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | kicbase/echo-server:functional-152000                                                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| image   | functional-152000 image ls                                                                                     | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	| image   | functional-152000 image load                                                                                   | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                  |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| image   | functional-152000 image ls                                                                                     | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	| image   | functional-152000 image save --daemon                                                                          | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | kicbase/echo-server:functional-152000                                                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-152000 ssh echo                                                                                     | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | hello                                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-152000 ssh cat                                                                                      | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | /etc/hostname                                                                                                  |                   |         |         |                     |                     |
	| tunnel  | functional-152000 tunnel                                                                                       | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| tunnel  | functional-152000 tunnel                                                                                       | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| tunnel  | functional-152000 tunnel                                                                                       | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| service | functional-152000 service list                                                                                 | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	| service | functional-152000 service list                                                                                 | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | -o json                                                                                                        |                   |         |         |                     |                     |
	| service | functional-152000 service                                                                                      | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | --namespace=default --https                                                                                    |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                               |                   |         |         |                     |                     |
	| service | functional-152000                                                                                              | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | service hello-node --url                                                                                       |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                               |                   |         |         |                     |                     |
	| service | functional-152000 service                                                                                      | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | hello-node --url                                                                                               |                   |         |         |                     |                     |
	| addons  | functional-152000 addons list                                                                                  | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	| addons  | functional-152000 addons list                                                                                  | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | -o json                                                                                                        |                   |         |         |                     |                     |
	| service | functional-152000 service                                                                                      | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | hello-node-connect --url                                                                                       |                   |         |         |                     |                     |
	| ssh     | functional-152000 ssh findmnt                                                                                  | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                         |                   |         |         |                     |                     |
	| mount   | -p functional-152000                                                                                           | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port371262820/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-152000 ssh findmnt                                                                                  | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | -T /mount-9p | grep 9p                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-152000 ssh -- ls                                                                                    | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | -la /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-152000 ssh cat                                                                                      | functional-152000 | jenkins | v1.34.0 | 06 Sep 24 11:47 PDT | 06 Sep 24 11:47 PDT |
	|         | /mount-9p/test-1725648467342873000                                                                             |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:46:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
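	(How to read the lines that follow, using the first one as a worked example: in "I0906 11:46:25.903127    4114 out.go:345] ...", the leading "I" is the severity from [IWEF] (Info/Warning/Error/Fatal), "0906" is the date (Sep 6), "11:46:25.903127" is the timestamp with microseconds, "4114" is the threadid field, and "out.go:345" is the source file and line that emitted the message.)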
	I0906 11:46:25.903127    4114 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:46:25.903259    4114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:46:25.903261    4114 out.go:358] Setting ErrFile to fd 2...
	I0906 11:46:25.903263    4114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:46:25.903402    4114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 11:46:25.904412    4114 out.go:352] Setting JSON to false
	I0906 11:46:25.920779    4114 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2755,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 11:46:25.920845    4114 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:46:25.925529    4114 out.go:177] * [functional-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 11:46:25.934491    4114 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:46:25.934551    4114 notify.go:220] Checking for updates...
	I0906 11:46:25.942392    4114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:46:25.946524    4114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 11:46:25.949554    4114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:46:25.950839    4114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 11:46:25.953522    4114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:46:25.956778    4114 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:46:25.956823    4114 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:46:25.961337    4114 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 11:46:25.968533    4114 start.go:297] selected driver: qemu2
	I0906 11:46:25.968537    4114 start.go:901] validating driver "qemu2" against &{Name:functional-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:46:25.968586    4114 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:46:25.970813    4114 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:46:25.970849    4114 cni.go:84] Creating CNI manager for ""
	I0906 11:46:25.970854    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:46:25.970895    4114 start.go:340] cluster config:
	{Name:functional-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:46:25.974276    4114 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:46:25.982550    4114 out.go:177] * Starting "functional-152000" primary control-plane node in "functional-152000" cluster
	I0906 11:46:25.986527    4114 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:46:25.986540    4114 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 11:46:25.986547    4114 cache.go:56] Caching tarball of preloaded images
	I0906 11:46:25.986604    4114 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 11:46:25.986608    4114 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 11:46:25.986661    4114 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/config.json ...
	I0906 11:46:25.987081    4114 start.go:360] acquireMachinesLock for functional-152000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 11:46:25.987119    4114 start.go:364] duration metric: took 32.584µs to acquireMachinesLock for "functional-152000"
	I0906 11:46:25.987127    4114 start.go:96] Skipping create...Using existing machine configuration
	I0906 11:46:25.987130    4114 fix.go:54] fixHost starting: 
	I0906 11:46:25.987751    4114 fix.go:112] recreateIfNeeded on functional-152000: state=Running err=<nil>
	W0906 11:46:25.987757    4114 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 11:46:25.995537    4114 out.go:177] * Updating the running qemu2 "functional-152000" VM ...
	I0906 11:46:25.999437    4114 machine.go:93] provisionDockerMachine start ...
	I0906 11:46:25.999467    4114 main.go:141] libmachine: Using SSH client type: native
	I0906 11:46:25.999586    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b445a0] 0x102b46e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 11:46:25.999589    4114 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 11:46:26.052973    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-152000
	
	I0906 11:46:26.052991    4114 buildroot.go:166] provisioning hostname "functional-152000"
	I0906 11:46:26.053036    4114 main.go:141] libmachine: Using SSH client type: native
	I0906 11:46:26.053160    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b445a0] 0x102b46e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 11:46:26.053164    4114 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-152000 && echo "functional-152000" | sudo tee /etc/hostname
	I0906 11:46:26.112005    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-152000
	
	I0906 11:46:26.112051    4114 main.go:141] libmachine: Using SSH client type: native
	I0906 11:46:26.112211    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b445a0] 0x102b46e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 11:46:26.112217    4114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-152000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-152000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-152000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 11:46:26.170292    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 11:46:26.170301    4114 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-2143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-2143/.minikube}
	I0906 11:46:26.170314    4114 buildroot.go:174] setting up certificates
	I0906 11:46:26.170317    4114 provision.go:84] configureAuth start
	I0906 11:46:26.170321    4114 provision.go:143] copyHostCerts
	I0906 11:46:26.170403    4114 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem, removing ...
	I0906 11:46:26.170411    4114 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem
	I0906 11:46:26.170533    4114 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem (1082 bytes)
	I0906 11:46:26.170697    4114 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem, removing ...
	I0906 11:46:26.170699    4114 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem
	I0906 11:46:26.170751    4114 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem (1123 bytes)
	I0906 11:46:26.171091    4114 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem, removing ...
	I0906 11:46:26.171094    4114 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem
	I0906 11:46:26.171162    4114 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem (1675 bytes)
	I0906 11:46:26.171264    4114 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem org=jenkins.functional-152000 san=[127.0.0.1 192.168.105.4 functional-152000 localhost minikube]
	I0906 11:46:26.298908    4114 provision.go:177] copyRemoteCerts
	I0906 11:46:26.298938    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 11:46:26.298945    4114 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
	I0906 11:46:26.329588    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 11:46:26.337716    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 11:46:26.345798    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 11:46:26.354752    4114 provision.go:87] duration metric: took 184.432083ms to configureAuth
	I0906 11:46:26.354759    4114 buildroot.go:189] setting minikube options for container-runtime
	I0906 11:46:26.354882    4114 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:46:26.354926    4114 main.go:141] libmachine: Using SSH client type: native
	I0906 11:46:26.355018    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b445a0] 0x102b46e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 11:46:26.355021    4114 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 11:46:26.410461    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 11:46:26.410467    4114 buildroot.go:70] root file system type: tmpfs
	I0906 11:46:26.410515    4114 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 11:46:26.410579    4114 main.go:141] libmachine: Using SSH client type: native
	I0906 11:46:26.410703    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b445a0] 0x102b46e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 11:46:26.410734    4114 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 11:46:26.471738    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 11:46:26.471790    4114 main.go:141] libmachine: Using SSH client type: native
	I0906 11:46:26.471917    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b445a0] 0x102b46e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 11:46:26.471923    4114 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 11:46:26.527421    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 11:46:26.527427    4114 machine.go:96] duration metric: took 527.993416ms to provisionDockerMachine
	I0906 11:46:26.527431    4114 start.go:293] postStartSetup for "functional-152000" (driver="qemu2")
	I0906 11:46:26.527437    4114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 11:46:26.527478    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 11:46:26.527485    4114 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
	I0906 11:46:26.556559    4114 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 11:46:26.558077    4114 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 11:46:26.558082    4114 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/addons for local assets ...
	I0906 11:46:26.558176    4114 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/files for local assets ...
	I0906 11:46:26.558311    4114 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem -> 26722.pem in /etc/ssl/certs
	I0906 11:46:26.558427    4114 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/test/nested/copy/2672/hosts -> hosts in /etc/test/nested/copy/2672
	I0906 11:46:26.558459    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2672
	I0906 11:46:26.561557    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /etc/ssl/certs/26722.pem (1708 bytes)
	I0906 11:46:26.569587    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/test/nested/copy/2672/hosts --> /etc/test/nested/copy/2672/hosts (40 bytes)
	I0906 11:46:26.578532    4114 start.go:296] duration metric: took 51.096208ms for postStartSetup
	I0906 11:46:26.578543    4114 fix.go:56] duration metric: took 591.421ms for fixHost
	I0906 11:46:26.578586    4114 main.go:141] libmachine: Using SSH client type: native
	I0906 11:46:26.578704    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102b445a0] 0x102b46e00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0906 11:46:26.578707    4114 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 11:46:26.631033    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725648386.735796170
	
	I0906 11:46:26.631039    4114 fix.go:216] guest clock: 1725648386.735796170
	I0906 11:46:26.631042    4114 fix.go:229] Guest: 2024-09-06 11:46:26.73579617 -0700 PDT Remote: 2024-09-06 11:46:26.578544 -0700 PDT m=+0.695912876 (delta=157.25217ms)
	I0906 11:46:26.631052    4114 fix.go:200] guest clock delta is within tolerance: 157.25217ms
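	(The clock check above is plain subtraction of the two timestamps it reports: guest 11:46:26.735796170 minus host 11:46:26.578544 = 0.157252170 s, i.e. the 157.25217ms delta logged, which falls within minikube's tolerance, so the guest clock is left untouched.)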
	I0906 11:46:26.631054    4114 start.go:83] releasing machines lock for "functional-152000", held for 643.940417ms
	I0906 11:46:26.631328    4114 ssh_runner.go:195] Run: cat /version.json
	I0906 11:46:26.631334    4114 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
	I0906 11:46:26.631350    4114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 11:46:26.631365    4114 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
	I0906 11:46:26.701629    4114 ssh_runner.go:195] Run: systemctl --version
	I0906 11:46:26.703813    4114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 11:46:26.705735    4114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 11:46:26.705759    4114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 11:46:26.709623    4114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 11:46:26.709629    4114 start.go:495] detecting cgroup driver to use...
	I0906 11:46:26.709694    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:46:26.716039    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 11:46:26.720300    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 11:46:26.724438    4114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 11:46:26.724461    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 11:46:26.728422    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:46:26.732504    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 11:46:26.736465    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:46:26.740458    4114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 11:46:26.744563    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 11:46:26.748633    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 11:46:26.752574    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 11:46:26.756571    4114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 11:46:26.760020    4114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 11:46:26.764454    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:46:26.851462    4114 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 11:46:26.863089    4114 start.go:495] detecting cgroup driver to use...
	I0906 11:46:26.863141    4114 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 11:46:26.869604    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:46:26.875159    4114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 11:46:26.887114    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:46:26.892445    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 11:46:26.898258    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:46:26.904677    4114 ssh_runner.go:195] Run: which cri-dockerd
	I0906 11:46:26.906283    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 11:46:26.909457    4114 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 11:46:26.915099    4114 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 11:46:27.028643    4114 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 11:46:27.125362    4114 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 11:46:27.125421    4114 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 11:46:27.132321    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:46:27.241938    4114 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 11:46:39.660082    4114 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.41828925s)
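	The 130-byte /etc/docker/daemon.json scp'd from memory a few lines above is not echoed into the log. As a minimal sketch, a Docker daemon.json selecting the cgroupfs driver, which is what this step configures, would read as follows (the exact payload minikube writes may carry additional keys):
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"]
		}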
	I0906 11:46:39.660142    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 11:46:39.666597    4114 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 11:46:39.674159    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:46:39.679413    4114 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 11:46:39.773644    4114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 11:46:39.867974    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:46:39.956936    4114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 11:46:39.964543    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:46:39.970165    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:46:40.056961    4114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 11:46:40.087089    4114 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 11:46:40.087152    4114 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 11:46:40.089671    4114 start.go:563] Will wait 60s for crictl version
	I0906 11:46:40.089701    4114 ssh_runner.go:195] Run: which crictl
	I0906 11:46:40.091126    4114 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 11:46:40.103586    4114 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 11:46:40.103668    4114 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:46:40.110904    4114 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:46:40.121109    4114 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 11:46:40.121178    4114 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 11:46:40.128185    4114 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0906 11:46:40.132141    4114 kubeadm.go:883] updating cluster {Name:functional-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 11:46:40.132202    4114 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:46:40.132264    4114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:46:40.138124    4114 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-152000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0906 11:46:40.138131    4114 docker.go:615] Images already preloaded, skipping extraction
	I0906 11:46:40.138172    4114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:46:40.143855    4114 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-152000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0906 11:46:40.143861    4114 cache_images.go:84] Images are preloaded, skipping loading
	I0906 11:46:40.143865    4114 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.0 docker true true} ...
	I0906 11:46:40.143925    4114 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-152000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
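	The kubelet fragment above uses the same ExecStart-reset idiom as the docker unit earlier (an empty ExecStart= clears the inherited command before the override is set), and is installed as a drop-in further down. On the node, the merged result of the unit plus its drop-ins can be inspected the same way this log inspects docker.service:
		sudo systemctl cat kubelet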
	I0906 11:46:40.143973    4114 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 11:46:40.160110    4114 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0906 11:46:40.160177    4114 cni.go:84] Creating CNI manager for ""
	I0906 11:46:40.160183    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:46:40.160186    4114 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 11:46:40.160195    4114 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-152000 NodeName:functional-152000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 11:46:40.160265    4114 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-152000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 11:46:40.160319    4114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 11:46:40.164080    4114 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 11:46:40.164103    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 11:46:40.167401    4114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 11:46:40.173308    4114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 11:46:40.179223    4114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0906 11:46:40.185121    4114 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0906 11:46:40.186398    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:46:40.258856    4114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:46:40.264748    4114 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000 for IP: 192.168.105.4
	I0906 11:46:40.264752    4114 certs.go:194] generating shared ca certs ...
	I0906 11:46:40.264759    4114 certs.go:226] acquiring lock for ca certs: {Name:mkeb2acf337d35e5b807329b963b0c0723ad2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:46:40.264903    4114 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key
	I0906 11:46:40.264956    4114 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key
	I0906 11:46:40.264965    4114 certs.go:256] generating profile certs ...
	I0906 11:46:40.265032    4114 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.key
	I0906 11:46:40.265084    4114 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/apiserver.key.e0c20b6f
	I0906 11:46:40.265133    4114 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/proxy-client.key
	I0906 11:46:40.265285    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem (1338 bytes)
	W0906 11:46:40.265314    4114 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672_empty.pem, impossibly tiny 0 bytes
	I0906 11:46:40.265318    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 11:46:40.265338    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem (1082 bytes)
	I0906 11:46:40.265356    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem (1123 bytes)
	I0906 11:46:40.265371    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem (1675 bytes)
	I0906 11:46:40.265407    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem (1708 bytes)
	I0906 11:46:40.265730    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 11:46:40.274377    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 11:46:40.282459    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 11:46:40.290445    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 11:46:40.298384    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 11:46:40.306265    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 11:46:40.314186    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 11:46:40.322075    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 11:46:40.330264    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /usr/share/ca-certificates/26722.pem (1708 bytes)
	I0906 11:46:40.338168    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 11:46:40.346360    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem --> /usr/share/ca-certificates/2672.pem (1338 bytes)
	I0906 11:46:40.354463    4114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 11:46:40.360552    4114 ssh_runner.go:195] Run: openssl version
	I0906 11:46:40.362764    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26722.pem && ln -fs /usr/share/ca-certificates/26722.pem /etc/ssl/certs/26722.pem"
	I0906 11:46:40.366288    4114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26722.pem
	I0906 11:46:40.367825    4114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:44 /usr/share/ca-certificates/26722.pem
	I0906 11:46:40.367848    4114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26722.pem
	I0906 11:46:40.369993    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26722.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 11:46:40.373588    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 11:46:40.377559    4114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:46:40.379151    4114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:46:40.379170    4114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:46:40.381205    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 11:46:40.385043    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2672.pem && ln -fs /usr/share/ca-certificates/2672.pem /etc/ssl/certs/2672.pem"
	I0906 11:46:40.389170    4114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2672.pem
	I0906 11:46:40.390791    4114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:44 /usr/share/ca-certificates/2672.pem
	I0906 11:46:40.390809    4114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2672.pem
	I0906 11:46:40.392908    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2672.pem /etc/ssl/certs/51391683.0"
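	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: a CA certificate becomes trusted by symlinking it at /etc/ssl/certs/<subject-hash>.0, where the hash is what openssl x509 -hash prints. Done by hand with the names from this log (minikubeCA hashes to b5213941):
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"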
	I0906 11:46:40.396706    4114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 11:46:40.398414    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 11:46:40.400525    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 11:46:40.402937    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 11:46:40.404850    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 11:46:40.406864    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 11:46:40.408889    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
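	Each -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours): openssl exits 0 if the cert stays valid past that window and non-zero otherwise, so a shell check reads naturally as:
		openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
		  && echo "valid for at least 24h" || echo "expires within 24h"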
	I0906 11:46:40.410873    4114 kubeadm.go:392] StartCluster: {Name:functional-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:46:40.410955    4114 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 11:46:40.416692    4114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 11:46:40.421014    4114 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 11:46:40.421017    4114 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 11:46:40.421042    4114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 11:46:40.424727    4114 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 11:46:40.425026    4114 kubeconfig.go:125] found "functional-152000" server: "https://192.168.105.4:8441"
	I0906 11:46:40.425663    4114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 11:46:40.429268    4114 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
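
	Note: the drift detection above is a plain `diff -u` between the kubeadm config currently on disk and the freshly rendered one; diff's non-zero exit status is what triggers the reconfigure path (here, because the test changed the apiserver's enable-admission-plugins extra option). The same check by hand:

	    # diff exits 0 when the files match, 1 when they differ;
	    # a difference means "config drift: reconfigure the cluster".
	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      echo "kubeadm config drift detected"
	    fi
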
	I0906 11:46:40.429271    4114 kubeadm.go:1160] stopping kube-system containers ...
	I0906 11:46:40.429311    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 11:46:40.436636    4114 docker.go:483] Stopping containers: [5c51321ca997 f2d3fb6f72d2 28af283a708d 0c3b2ccf0e27 6ca94d593630 af0616566f10 25f65f6241b7 15cc40d93868 0b3d5cea6ad1 acf714636111 d45cf0befd32 cadc0991e0c0 04601ed264ab 817cdaaa170d 73a5bdd72ddd a49587ced0c0 2a0896f85d14 42f4b7fd3fde 158f55030395 b4d6ffb233ea fbbe7ed5a587 6b1cf3a8358c 34c12739f5ee d7cd32413913 69d0759a3af9 5928c252fbab 90efbdf913b0 f350c66373bb]
	I0906 11:46:40.436692    4114 ssh_runner.go:195] Run: docker stop 5c51321ca997 f2d3fb6f72d2 28af283a708d 0c3b2ccf0e27 6ca94d593630 af0616566f10 25f65f6241b7 15cc40d93868 0b3d5cea6ad1 acf714636111 d45cf0befd32 cadc0991e0c0 04601ed264ab 817cdaaa170d 73a5bdd72ddd a49587ced0c0 2a0896f85d14 42f4b7fd3fde 158f55030395 b4d6ffb233ea fbbe7ed5a587 6b1cf3a8358c 34c12739f5ee d7cd32413913 69d0759a3af9 5928c252fbab 90efbdf913b0 f350c66373bb
	I0906 11:46:40.447550    4114 ssh_runner.go:195] Run: sudo systemctl stop kubelet
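
	Note: before reconfiguring, minikube stops every kube-system pod container (matched via the `k8s_<container>_<pod>_<namespace>_...` names cri-dockerd gives Docker containers) and then stops the kubelet so nothing respawns mid-restart. A hedged equivalent of the two steps above:

	    # Collect kube-system container IDs by cri-dockerd's naming pattern and stop them,
	    # then stop kubelet so it cannot restart the pods during reconfiguration.
	    ids=$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')
	    [ -n "$ids" ] && docker stop $ids
	    sudo systemctl stop kubelet
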
	I0906 11:46:40.558092    4114 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 11:46:40.564179    4114 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep  6 18:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep  6 18:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep  6 18:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep  6 18:45 /etc/kubernetes/scheduler.conf
	
	I0906 11:46:40.564219    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0906 11:46:40.569384    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0906 11:46:40.573902    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0906 11:46:40.578112    4114 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 11:46:40.578135    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 11:46:40.581993    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0906 11:46:40.585650    4114 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 11:46:40.585673    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
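
	Note: each kubeconfig under /etc/kubernetes is grepped for the expected endpoint, https://control-plane.minikube.internal:8441; files that fail the grep (here controller-manager.conf and scheduler.conf) are treated as stale and deleted so the next kubeadm phase regenerates them. A sketch generalizing the pattern in the log:

	    endpoint='https://control-plane.minikube.internal:8441'
	    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
	        # Stale file: remove it so kubeadm regenerates it from the new config.
	        sudo rm -f "/etc/kubernetes/$conf"
	      fi
	    done
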
	I0906 11:46:40.589564    4114 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 11:46:40.593576    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 11:46:40.610634    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 11:46:41.249641    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 11:46:41.364201    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 11:46:41.385411    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
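
	Note: the restart does not rerun a full `kubeadm init`; it replays only the phases needed to rebuild the control plane from the updated config, with PATH pointed at the version-pinned binaries. Roughly, the five commands above are:

	    # Replay just the init phases needed for an in-place control-plane restart.
	    BIN=/var/lib/minikube/binaries/v1.31.0
	    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	      # $phase is intentionally unquoted so 'certs all' splits into two arguments.
	      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done
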
	I0906 11:46:41.406851    4114 api_server.go:52] waiting for apiserver process to appear ...
	I0906 11:46:41.406923    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:46:41.907742    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:46:42.408950    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:46:42.414195    4114 api_server.go:72] duration metric: took 1.007358042s to wait for apiserver process to appear ...
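
	Note: the process wait above polls `pgrep -xnf` (exact full-command-line match, newest process) roughly every 500ms, as the timestamps show, until a kube-apiserver process exists. As a loop:

	    # Poll about twice a second until kube-apiserver is running.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 0.5
	    done
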
	I0906 11:46:42.414201    4114 api_server.go:88] waiting for apiserver healthz status ...
	I0906 11:46:42.414209    4114 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 11:46:44.479746    4114 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 11:46:44.479758    4114 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 11:46:44.479764    4114 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 11:46:44.493428    4114 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 11:46:44.493436    4114 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 11:46:44.916357    4114 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 11:46:44.930440    4114 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 11:46:44.930498    4114 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 11:46:45.414602    4114 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 11:46:45.426291    4114 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 11:46:45.426313    4114 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
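
	Note: a 500 from /healthz arrives with the per-check breakdown shown above; the only [-] entries are the RBAC and priority-class bootstrap post-start hooks, which simply have not finished yet. When probing by hand, individual checks can be excluded with the documented `exclude` query parameter, which is one way to confirm everything else is already healthy (assumes anonymous access to /healthz is permitted):

	    # Ignore the two still-failing post-start hooks; if all other checks pass,
	    # the response body is a plain "ok".
	    curl -sk 'https://192.168.105.4:8441/healthz?exclude=poststarthook/rbac/bootstrap-roles&exclude=poststarthook/scheduling/bootstrap-system-priority-classes'; echo
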
	I0906 11:46:45.916267    4114 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 11:46:45.919339    4114 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0906 11:46:45.926079    4114 api_server.go:141] control plane version: v1.31.0
	I0906 11:46:45.926088    4114 api_server.go:131] duration metric: took 3.511929042s to wait for apiserver health ...
	I0906 11:46:45.926094    4114 cni.go:84] Creating CNI manager for ""
	I0906 11:46:45.926099    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:46:46.007064    4114 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 11:46:46.011033    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 11:46:46.015223    4114 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
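
	Note: the 496-byte file copied above is minikube's bridge CNI config. The log does not show its contents, but a bridge conflist for the 10.244.0.0/24 pod CIDR reported later under "describe nodes" typically resembles the following (a hedged sketch, not the verbatim file):

	    # Write a representative bridge conflist (illustrative content only).
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
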
	I0906 11:46:46.023285    4114 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 11:46:46.028870    4114 system_pods.go:59] 7 kube-system pods found
	I0906 11:46:46.028882    4114 system_pods.go:61] "coredns-6f6b679f8f-7twkf" [80e49f93-2ebf-473c-a0f5-1d708776cf91] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 11:46:46.028885    4114 system_pods.go:61] "etcd-functional-152000" [4546b350-cc12-41f7-8559-cb4e3890f739] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 11:46:46.028888    4114 system_pods.go:61] "kube-apiserver-functional-152000" [432a98fc-bf8f-4f64-ab04-5f80d5c7985a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 11:46:46.028890    4114 system_pods.go:61] "kube-controller-manager-functional-152000" [8e2ac1bf-b356-477a-bb81-b705561d8ca5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 11:46:46.028893    4114 system_pods.go:61] "kube-proxy-gd8mr" [f8e39fbf-0b2b-416e-a0ac-f644485a0adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 11:46:46.028894    4114 system_pods.go:61] "kube-scheduler-functional-152000" [043e09ab-5a15-490b-9e23-604c93137341] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 11:46:46.028896    4114 system_pods.go:61] "storage-provisioner" [337229e4-0d57-4b50-bcac-9715daaefc64] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 11:46:46.028899    4114 system_pods.go:74] duration metric: took 5.606875ms to wait for pod list to return data ...
	I0906 11:46:46.028902    4114 node_conditions.go:102] verifying NodePressure condition ...
	I0906 11:46:46.030676    4114 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 11:46:46.030682    4114 node_conditions.go:123] node cpu capacity is 2
	I0906 11:46:46.030688    4114 node_conditions.go:105] duration metric: took 1.783917ms to run NodePressure ...
	I0906 11:46:46.030695    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 11:46:46.265957    4114 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 11:46:46.269845    4114 kubeadm.go:739] kubelet initialised
	I0906 11:46:46.269853    4114 kubeadm.go:740] duration metric: took 3.882625ms waiting for restarted kubelet to initialise ...
	I0906 11:46:46.269859    4114 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:46:46.273875    4114 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-7twkf" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:48.281654    4114 pod_ready.go:103] pod "coredns-6f6b679f8f-7twkf" in "kube-system" namespace has status "Ready":"False"
	I0906 11:46:50.289788    4114 pod_ready.go:93] pod "coredns-6f6b679f8f-7twkf" in "kube-system" namespace has status "Ready":"True"
	I0906 11:46:50.289811    4114 pod_ready.go:82] duration metric: took 4.015974834s for pod "coredns-6f6b679f8f-7twkf" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:50.289830    4114 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:52.303948    4114 pod_ready.go:103] pod "etcd-functional-152000" in "kube-system" namespace has status "Ready":"False"
	I0906 11:46:54.305651    4114 pod_ready.go:103] pod "etcd-functional-152000" in "kube-system" namespace has status "Ready":"False"
	I0906 11:46:56.805796    4114 pod_ready.go:93] pod "etcd-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:46:56.805824    4114 pod_ready.go:82] duration metric: took 6.516065s for pod "etcd-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:56.805838    4114 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:58.322378    4114 pod_ready.go:93] pod "kube-apiserver-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:46:58.322402    4114 pod_ready.go:82] duration metric: took 1.516571s for pod "kube-apiserver-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:58.322419    4114 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:59.837427    4114 pod_ready.go:93] pod "kube-controller-manager-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:46:59.837453    4114 pod_ready.go:82] duration metric: took 1.515039416s for pod "kube-controller-manager-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:59.837471    4114 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gd8mr" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:59.845239    4114 pod_ready.go:93] pod "kube-proxy-gd8mr" in "kube-system" namespace has status "Ready":"True"
	I0906 11:46:59.845250    4114 pod_ready.go:82] duration metric: took 7.771416ms for pod "kube-proxy-gd8mr" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:59.845259    4114 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:59.853143    4114 pod_ready.go:93] pod "kube-scheduler-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:46:59.853157    4114 pod_ready.go:82] duration metric: took 7.889542ms for pod "kube-scheduler-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:46:59.853171    4114 pod_ready.go:39] duration metric: took 13.583474333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:46:59.853195    4114 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 11:46:59.864620    4114 ops.go:34] apiserver oom_adj: -16
	I0906 11:46:59.864631    4114 kubeadm.go:597] duration metric: took 19.443853416s to restartPrimaryControlPlane
	I0906 11:46:59.864638    4114 kubeadm.go:394] duration metric: took 19.454011334s to StartCluster
	I0906 11:46:59.864655    4114 settings.go:142] acquiring lock: {Name:mk12afd771d0c660db2e89d96a6968c1a28fb2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:46:59.864849    4114 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:46:59.865555    4114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:46:59.866953    4114 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 11:46:59.866996    4114 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 11:46:59.867100    4114 addons.go:69] Setting storage-provisioner=true in profile "functional-152000"
	I0906 11:46:59.867116    4114 addons.go:69] Setting default-storageclass=true in profile "functional-152000"
	I0906 11:46:59.867127    4114 addons.go:234] Setting addon storage-provisioner=true in "functional-152000"
	W0906 11:46:59.867133    4114 addons.go:243] addon storage-provisioner should already be in state true
	I0906 11:46:59.867150    4114 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-152000"
	I0906 11:46:59.867157    4114 host.go:66] Checking if "functional-152000" exists ...
	I0906 11:46:59.867234    4114 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:46:59.869056    4114 addons.go:234] Setting addon default-storageclass=true in "functional-152000"
	W0906 11:46:59.869061    4114 addons.go:243] addon default-storageclass should already be in state true
	I0906 11:46:59.869072    4114 host.go:66] Checking if "functional-152000" exists ...
	I0906 11:46:59.871074    4114 out.go:177] * Verifying Kubernetes components...
	I0906 11:46:59.871667    4114 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 11:46:59.875238    4114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 11:46:59.875254    4114 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
	I0906 11:46:59.878945    4114 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 11:46:59.883047    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:46:59.885981    4114 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:46:59.885986    4114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 11:46:59.885994    4114 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
	I0906 11:47:00.000816    4114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:47:00.007948    4114 node_ready.go:35] waiting up to 6m0s for node "functional-152000" to be "Ready" ...
	I0906 11:47:00.009508    4114 node_ready.go:49] node "functional-152000" has status "Ready":"True"
	I0906 11:47:00.009520    4114 node_ready.go:38] duration metric: took 1.55525ms for node "functional-152000" to be "Ready" ...
	I0906 11:47:00.009523    4114 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:47:00.012457    4114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 11:47:00.012879    4114 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7twkf" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:00.015337    4114 pod_ready.go:93] pod "coredns-6f6b679f8f-7twkf" in "kube-system" namespace has status "Ready":"True"
	I0906 11:47:00.015340    4114 pod_ready.go:82] duration metric: took 2.456041ms for pod "coredns-6f6b679f8f-7twkf" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:00.015344    4114 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:00.066661    4114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:47:00.343276    4114 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0906 11:47:00.347468    4114 addons.go:510] duration metric: took 480.508458ms for enable addons: enabled=[default-storageclass storage-provisioner]
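
	Note: both addons are applied as plain manifests with the version-pinned kubectl against the in-VM kubeconfig, as the two `kubectl apply` lines above show. The same toggles are available from the host via the CLI, e.g.:

	    # Host-side equivalents for the addons enabled above (profile name from the log):
	    minikube -p functional-152000 addons enable storage-provisioner
	    minikube -p functional-152000 addons enable default-storageclass
	    minikube -p functional-152000 addons list
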
	I0906 11:47:00.395379    4114 pod_ready.go:93] pod "etcd-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:47:00.395383    4114 pod_ready.go:82] duration metric: took 380.042041ms for pod "etcd-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:00.395387    4114 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:00.801765    4114 pod_ready.go:93] pod "kube-apiserver-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:47:00.801795    4114 pod_ready.go:82] duration metric: took 406.403917ms for pod "kube-apiserver-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:00.801821    4114 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:01.201534    4114 pod_ready.go:93] pod "kube-controller-manager-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:47:01.201566    4114 pod_ready.go:82] duration metric: took 399.731708ms for pod "kube-controller-manager-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:01.201586    4114 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd8mr" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:01.598295    4114 pod_ready.go:93] pod "kube-proxy-gd8mr" in "kube-system" namespace has status "Ready":"True"
	I0906 11:47:01.598311    4114 pod_ready.go:82] duration metric: took 396.717583ms for pod "kube-proxy-gd8mr" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:01.598322    4114 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:01.996719    4114 pod_ready.go:93] pod "kube-scheduler-functional-152000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:47:01.996728    4114 pod_ready.go:82] duration metric: took 398.403167ms for pod "kube-scheduler-functional-152000" in "kube-system" namespace to be "Ready" ...
	I0906 11:47:01.996736    4114 pod_ready.go:39] duration metric: took 1.98723275s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:47:01.996807    4114 api_server.go:52] waiting for apiserver process to appear ...
	I0906 11:47:01.996958    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:47:02.008192    4114 api_server.go:72] duration metric: took 2.141246958s to wait for apiserver process to appear ...
	I0906 11:47:02.008201    4114 api_server.go:88] waiting for apiserver healthz status ...
	I0906 11:47:02.008214    4114 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0906 11:47:02.012382    4114 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0906 11:47:02.013068    4114 api_server.go:141] control plane version: v1.31.0
	I0906 11:47:02.013074    4114 api_server.go:131] duration metric: took 4.869583ms to wait for apiserver health ...
	I0906 11:47:02.013077    4114 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 11:47:02.205621    4114 system_pods.go:59] 7 kube-system pods found
	I0906 11:47:02.205650    4114 system_pods.go:61] "coredns-6f6b679f8f-7twkf" [80e49f93-2ebf-473c-a0f5-1d708776cf91] Running
	I0906 11:47:02.205659    4114 system_pods.go:61] "etcd-functional-152000" [4546b350-cc12-41f7-8559-cb4e3890f739] Running
	I0906 11:47:02.205663    4114 system_pods.go:61] "kube-apiserver-functional-152000" [432a98fc-bf8f-4f64-ab04-5f80d5c7985a] Running
	I0906 11:47:02.205667    4114 system_pods.go:61] "kube-controller-manager-functional-152000" [8e2ac1bf-b356-477a-bb81-b705561d8ca5] Running
	I0906 11:47:02.205671    4114 system_pods.go:61] "kube-proxy-gd8mr" [f8e39fbf-0b2b-416e-a0ac-f644485a0adb] Running
	I0906 11:47:02.205675    4114 system_pods.go:61] "kube-scheduler-functional-152000" [043e09ab-5a15-490b-9e23-604c93137341] Running
	I0906 11:47:02.205678    4114 system_pods.go:61] "storage-provisioner" [337229e4-0d57-4b50-bcac-9715daaefc64] Running
	I0906 11:47:02.205684    4114 system_pods.go:74] duration metric: took 192.604708ms to wait for pod list to return data ...
	I0906 11:47:02.205695    4114 default_sa.go:34] waiting for default service account to be created ...
	I0906 11:47:02.401205    4114 default_sa.go:45] found service account: "default"
	I0906 11:47:02.401231    4114 default_sa.go:55] duration metric: took 195.530542ms for default service account to be created ...
	I0906 11:47:02.401247    4114 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 11:47:02.608099    4114 system_pods.go:86] 7 kube-system pods found
	I0906 11:47:02.608128    4114 system_pods.go:89] "coredns-6f6b679f8f-7twkf" [80e49f93-2ebf-473c-a0f5-1d708776cf91] Running
	I0906 11:47:02.608141    4114 system_pods.go:89] "etcd-functional-152000" [4546b350-cc12-41f7-8559-cb4e3890f739] Running
	I0906 11:47:02.608147    4114 system_pods.go:89] "kube-apiserver-functional-152000" [432a98fc-bf8f-4f64-ab04-5f80d5c7985a] Running
	I0906 11:47:02.608154    4114 system_pods.go:89] "kube-controller-manager-functional-152000" [8e2ac1bf-b356-477a-bb81-b705561d8ca5] Running
	I0906 11:47:02.608163    4114 system_pods.go:89] "kube-proxy-gd8mr" [f8e39fbf-0b2b-416e-a0ac-f644485a0adb] Running
	I0906 11:47:02.608168    4114 system_pods.go:89] "kube-scheduler-functional-152000" [043e09ab-5a15-490b-9e23-604c93137341] Running
	I0906 11:47:02.608173    4114 system_pods.go:89] "storage-provisioner" [337229e4-0d57-4b50-bcac-9715daaefc64] Running
	I0906 11:47:02.608185    4114 system_pods.go:126] duration metric: took 206.932667ms to wait for k8s-apps to be running ...
	I0906 11:47:02.608197    4114 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 11:47:02.608397    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 11:47:02.627938    4114 system_svc.go:56] duration metric: took 19.738834ms WaitForService to wait for kubelet
	I0906 11:47:02.627953    4114 kubeadm.go:582] duration metric: took 2.761013833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:47:02.627973    4114 node_conditions.go:102] verifying NodePressure condition ...
	I0906 11:47:02.802380    4114 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 11:47:02.802410    4114 node_conditions.go:123] node cpu capacity is 2
	I0906 11:47:02.802442    4114 node_conditions.go:105] duration metric: took 174.46425ms to run NodePressure ...
	I0906 11:47:02.802473    4114 start.go:241] waiting for startup goroutines ...
	I0906 11:47:02.802492    4114 start.go:246] waiting for cluster config update ...
	I0906 11:47:02.802516    4114 start.go:255] writing updated cluster config ...
	I0906 11:47:02.804121    4114 ssh_runner.go:195] Run: rm -f paused
	I0906 11:47:02.871229    4114 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0906 11:47:02.875427    4114 out.go:201] 
	W0906 11:47:02.879504    4114 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0906 11:47:02.883429    4114 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0906 11:47:02.891436    4114 out.go:177] * Done! kubectl is now configured to use "functional-152000" cluster and "default" namespace by default
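
	Note: the closing warning reflects kubectl's version-skew policy, which supports a client within one minor version of the server; 1.29 against 1.31 is a skew of two minors. The suggestion in the log sidesteps the skew by using minikube's bundled, version-matched kubectl:

	    # Use the kubectl that matches the cluster instead of /usr/local/bin/kubectl:
	    minikube -p functional-152000 kubectl -- version
	    minikube -p functional-152000 kubectl -- get pods -A
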
	
	
	==> Docker <==
	Sep 06 18:47:40 functional-152000 dockerd[5725]: time="2024-09-06T18:47:40.631646724Z" level=warning msg="cleaning up after shim disconnected" id=9a5b516e22a4724ff292bde504d3aaa9232695b0b953f683a9c223906813535c namespace=moby
	Sep 06 18:47:40 functional-152000 dockerd[5725]: time="2024-09-06T18:47:40.631650809Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:47:40 functional-152000 cri-dockerd[5973]: time="2024-09-06T18:47:40Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 06 18:47:40 functional-152000 dockerd[5725]: time="2024-09-06T18:47:40.909229494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 18:47:40 functional-152000 dockerd[5725]: time="2024-09-06T18:47:40.909339537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 18:47:40 functional-152000 dockerd[5725]: time="2024-09-06T18:47:40.909373926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:47:40 functional-152000 dockerd[5725]: time="2024-09-06T18:47:40.909443995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:47:48 functional-152000 dockerd[5725]: time="2024-09-06T18:47:48.916165325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 18:47:48 functional-152000 dockerd[5725]: time="2024-09-06T18:47:48.916196712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 18:47:48 functional-152000 dockerd[5725]: time="2024-09-06T18:47:48.916202840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:47:48 functional-152000 dockerd[5725]: time="2024-09-06T18:47:48.916232101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:47:48 functional-152000 cri-dockerd[5973]: time="2024-09-06T18:47:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3c6797b1ebe043628fdf7e835e15e633bf4721494ea78755a097485afe3be21/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 18:47:50 functional-152000 cri-dockerd[5973]: time="2024-09-06T18:47:50Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 06 18:47:50 functional-152000 dockerd[5725]: time="2024-09-06T18:47:50.370345448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 18:47:50 functional-152000 dockerd[5725]: time="2024-09-06T18:47:50.370377669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 18:47:50 functional-152000 dockerd[5725]: time="2024-09-06T18:47:50.370394217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:47:50 functional-152000 dockerd[5725]: time="2024-09-06T18:47:50.370424312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:47:50 functional-152000 dockerd[5725]: time="2024-09-06T18:47:50.401468561Z" level=info msg="shim disconnected" id=a3fa6fda52aa481f6da9f74cd8fc779e0b100865b556c5585b126107ad0c2208 namespace=moby
	Sep 06 18:47:50 functional-152000 dockerd[5725]: time="2024-09-06T18:47:50.401520998Z" level=warning msg="cleaning up after shim disconnected" id=a3fa6fda52aa481f6da9f74cd8fc779e0b100865b556c5585b126107ad0c2208 namespace=moby
	Sep 06 18:47:50 functional-152000 dockerd[5725]: time="2024-09-06T18:47:50.401525791Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:47:50 functional-152000 dockerd[5719]: time="2024-09-06T18:47:50.401636751Z" level=info msg="ignoring event" container=a3fa6fda52aa481f6da9f74cd8fc779e0b100865b556c5585b126107ad0c2208 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:47:51 functional-152000 dockerd[5719]: time="2024-09-06T18:47:51.693875185Z" level=info msg="ignoring event" container=b3c6797b1ebe043628fdf7e835e15e633bf4721494ea78755a097485afe3be21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:47:51 functional-152000 dockerd[5725]: time="2024-09-06T18:47:51.694127531Z" level=info msg="shim disconnected" id=b3c6797b1ebe043628fdf7e835e15e633bf4721494ea78755a097485afe3be21 namespace=moby
	Sep 06 18:47:51 functional-152000 dockerd[5725]: time="2024-09-06T18:47:51.694174424Z" level=warning msg="cleaning up after shim disconnected" id=b3c6797b1ebe043628fdf7e835e15e633bf4721494ea78755a097485afe3be21 namespace=moby
	Sep 06 18:47:51 functional-152000 dockerd[5725]: time="2024-09-06T18:47:51.694179218Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a3fa6fda52aa4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 seconds ago        Exited              mount-munger              0                   b3c6797b1ebe0       busybox-mount
	1f01748ce126d       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         12 seconds ago       Running             myfrontend                0                   79d05e59e78cb       sp-pod
	9a5b516e22a47       72565bf5bbedf                                                                                         12 seconds ago       Exited              echoserver-arm            2                   8306181cfcafc       hello-node-connect-65d86f57f4-lpwm6
	3217ee2c51c7b       72565bf5bbedf                                                                                         23 seconds ago       Exited              echoserver-arm            2                   98bf0304367ed       hello-node-64b4f8f9ff-w6kkx
	1292d4ef20ac6       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                         36 seconds ago       Running             nginx                     0                   61b8969595790       nginx-svc
	65470f88349d5       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   5508d6367fdea       coredns-6f6b679f8f-7twkf
	7177fef22eef7       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   9bcb05a6fb7a1       kube-proxy-gd8mr
	952bddde03c94       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   2fd19fac5faeb       storage-provisioner
	a196bd54a2db8       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   2fe6e77026a2f       kube-controller-manager-functional-152000
	4bbf8f9d0b9cf       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   3e4310faeb823       kube-scheduler-functional-152000
	ecadbabfa1150       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   44e7e8a23a712       etcd-functional-152000
	04fc9bd81509a       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   a407a5a5624bb       kube-apiserver-functional-152000
	5c51321ca997c       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   25f65f6241b76       storage-provisioner
	f2d3fb6f72d2d       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   6ca94d5936308       coredns-6f6b679f8f-7twkf
	28af283a708d8       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   af0616566f105       kube-proxy-gd8mr
	15cc40d938688       fbbbd428abb4d                                                                                         About a minute ago   Exited              kube-scheduler            1                   cadc0991e0c0c       kube-scheduler-functional-152000
	0b3d5cea6ad1a       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   73a5bdd72ddd1       etcd-functional-152000
	acf7146361118       fcb0683e6bdbd                                                                                         About a minute ago   Exited              kube-controller-manager   1                   04601ed264abc       kube-controller-manager-functional-152000
	
	
	==> coredns [65470f88349d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56807 - 32935 "HINFO IN 6593387512151747770.1989517358922459076. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00440471s
	[INFO] 10.244.0.1:40783 - 49976 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000103501s
	[INFO] 10.244.0.1:40761 - 40843 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000147061s
	[INFO] 10.244.0.1:2034 - 28073 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001468773s
	[INFO] 10.244.0.1:11977 - 8 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000142184s
	[INFO] 10.244.0.1:7046 - 59928 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000060775s
	[INFO] 10.244.0.1:13277 - 42217 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000100625s
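
	Note: the query log above shows A/AAAA lookups for nginx-svc.default.svc.cluster.local answered NOERROR, i.e. in-cluster service discovery for the nginx-svc service is working. The lookup can be reproduced from any pod; a hedged one-off example using the busybox image already pulled in the Docker log above:

	    # Throwaway pod that performs the same DNS lookup and is removed afterwards.
	    kubectl run dns-test --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- \
	      nslookup nginx-svc.default.svc.cluster.local
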
	
	
	==> coredns [f2d3fb6f72d2] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41101 - 19430 "HINFO IN 6742108867718733250.868254855292763711. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.005320767s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-152000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-152000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=functional-152000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_45_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:45:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-152000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:47:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:47:45 +0000   Fri, 06 Sep 2024 18:45:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:47:45 +0000   Fri, 06 Sep 2024 18:45:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:47:45 +0000   Fri, 06 Sep 2024 18:45:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:47:45 +0000   Fri, 06 Sep 2024 18:45:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-152000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 32fc289d3e2744cb96f0c07205da8f67
	  System UUID:                32fc289d3e2744cb96f0c07205da8f67
	  Boot ID:                    c959707d-a187-4035-9eb4-1d907bb14d39
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-w6kkx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     hello-node-connect-65d86f57f4-lpwm6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-6f6b679f8f-7twkf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m20s
	  kube-system                 etcd-functional-152000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m26s
	  kube-system                 kube-apiserver-functional-152000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-functional-152000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-gd8mr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-functional-152000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m19s                  kube-proxy       
	  Normal  Starting                 66s                    kube-proxy       
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m26s (x2 over 2m26s)  kubelet          Node functional-152000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m26s (x2 over 2m26s)  kubelet          Node functional-152000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m26s (x2 over 2m26s)  kubelet          Node functional-152000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m22s                  kubelet          Node functional-152000 status is now: NodeReady
	  Normal  RegisteredNode           2m21s                  node-controller  Node functional-152000 event: Registered Node functional-152000 in Controller
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node functional-152000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node functional-152000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 116s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)    kubelet          Node functional-152000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           110s                   node-controller  Node functional-152000 event: Registered Node functional-152000 in Controller
	  Normal  Starting                 71s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 71s)      kubelet          Node functional-152000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 71s)      kubelet          Node functional-152000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 71s)      kubelet          Node functional-152000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                    node-controller  Node functional-152000 event: Registered Node functional-152000 in Controller
	
	
	==> dmesg <==
	[  +3.403607] kauditd_printk_skb: 199 callbacks suppressed
	[Sep 6 18:46] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.029378] systemd-fstab-generator[4812]: Ignoring "noauto" option for root device
	[ +10.748109] systemd-fstab-generator[5250]: Ignoring "noauto" option for root device
	[  +0.053500] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.123204] systemd-fstab-generator[5284]: Ignoring "noauto" option for root device
	[  +0.103071] systemd-fstab-generator[5296]: Ignoring "noauto" option for root device
	[  +0.110143] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +5.103374] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.449083] systemd-fstab-generator[5926]: Ignoring "noauto" option for root device
	[  +0.093559] systemd-fstab-generator[5938]: Ignoring "noauto" option for root device
	[  +0.089531] systemd-fstab-generator[5950]: Ignoring "noauto" option for root device
	[  +0.099543] systemd-fstab-generator[5965]: Ignoring "noauto" option for root device
	[  +0.200887] systemd-fstab-generator[6133]: Ignoring "noauto" option for root device
	[  +1.090668] systemd-fstab-generator[6253]: Ignoring "noauto" option for root device
	[  +4.449780] kauditd_printk_skb: 199 callbacks suppressed
	[ +14.181692] systemd-fstab-generator[7275]: Ignoring "noauto" option for root device
	[  +0.055115] kauditd_printk_skb: 35 callbacks suppressed
	[Sep 6 18:47] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.353602] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.054157] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.812993] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.713213] kauditd_printk_skb: 10 callbacks suppressed
	[  +9.355781] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.693142] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0b3d5cea6ad1] <==
	{"level":"info","ts":"2024-09-06T18:45:58.326260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T18:45:58.326345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-06T18:45:58.326387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T18:45:58.326404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-06T18:45:58.326431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-06T18:45:58.326465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-06T18:45:58.331553Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:45:58.331568Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-152000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T18:45:58.331861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:45:58.334019Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:45:58.334269Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T18:45:58.334302Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T18:45:58.335253Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:45:58.335792Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T18:45:58.336618Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-06T18:46:27.372616Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-06T18:46:27.372648Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-152000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-06T18:46:27.372694Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T18:46:27.372737Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T18:46:27.379621Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T18:46:27.379645Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T18:46:27.379665Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-06T18:46:27.381273Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-06T18:46:27.381299Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-06T18:46:27.381302Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-152000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [ecadbabfa115] <==
	{"level":"info","ts":"2024-09-06T18:46:42.283530Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:46:42.283562Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:46:42.284764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:46:42.285341Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T18:46:42.285428Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-06T18:46:42.285451Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-06T18:46:42.286263Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T18:46:42.286294Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T18:46:44.072081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-06T18:46:44.072235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-06T18:46:44.072304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-06T18:46:44.072739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-06T18:46:44.072791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-06T18:46:44.072827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-06T18:46:44.072849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-06T18:46:44.077748Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-152000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T18:46:44.078067Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:46:44.078333Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T18:46:44.078391Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T18:46:44.078431Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:46:44.080696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:46:44.080696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:46:44.083025Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-06T18:46:44.084634Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T18:47:29.801781Z","caller":"traceutil/trace.go:171","msg":"trace[1109846973] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"200.877974ms","start":"2024-09-06T18:47:29.600893Z","end":"2024-09-06T18:47:29.801771Z","steps":["trace[1109846973] 'process raft request'  (duration: 200.761843ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:47:52 up 2 min,  0 users,  load average: 0.80, 0.44, 0.18
	Linux functional-152000 5.10.207 #1 SMP PREEMPT Tue Sep 3 18:23:52 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [04fc9bd81509] <==
	I0906 18:46:44.667351       1 policy_source.go:224] refreshing policies
	E0906 18:46:44.668806       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0906 18:46:44.669563       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 18:46:44.679739       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 18:46:44.679758       1 aggregator.go:171] initial CRD sync complete...
	I0906 18:46:44.679763       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 18:46:44.679766       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 18:46:44.679768       1 cache.go:39] Caches are synced for autoregister controller
	I0906 18:46:44.687515       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 18:46:44.687562       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 18:46:44.687518       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 18:46:44.709269       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 18:46:45.570034       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 18:46:46.204400       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 18:46:46.209302       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 18:46:46.224146       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 18:46:46.232156       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 18:46:46.235161       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 18:46:47.958198       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 18:46:48.361262       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 18:47:04.426563       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.190.113"}
	I0906 18:47:09.213825       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0906 18:47:09.255465       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.201.176"}
	I0906 18:47:13.276285       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.61.91"}
	I0906 18:47:22.723620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.23.122"}
	
	
	==> kube-controller-manager [a196bd54a2db] <==
	I0906 18:46:48.571942       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 18:46:48.657926       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 18:46:48.658120       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 18:46:50.216047       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.48454ms"
	I0906 18:46:50.216350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="44.509µs"
	I0906 18:47:09.226950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="12.3767ms"
	I0906 18:47:09.235105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.091644ms"
	I0906 18:47:09.238996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="3.865276ms"
	I0906 18:47:09.239029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="16.381µs"
	I0906 18:47:14.998965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="77.532µs"
	I0906 18:47:15.160971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-152000"
	I0906 18:47:16.017113       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="22.634µs"
	I0906 18:47:17.017852       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="53.772µs"
	I0906 18:47:22.692509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="8.807636ms"
	I0906 18:47:22.696455       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="3.6934ms"
	I0906 18:47:22.696655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="40.684µs"
	I0906 18:47:22.697012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="7.878µs"
	I0906 18:47:24.147736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="32.889µs"
	I0906 18:47:25.170476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="40.308µs"
	I0906 18:47:26.171329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.426µs"
	I0906 18:47:30.215533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.26µs"
	I0906 18:47:41.414568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="45.726µs"
	I0906 18:47:43.572560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="88.576µs"
	I0906 18:47:45.847055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-152000"
	I0906 18:47:51.569090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="48.561µs"
	
	
	==> kube-controller-manager [acf714636111] <==
	I0906 18:46:02.266217       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0906 18:46:02.296132       1 shared_informer.go:320] Caches are synced for node
	I0906 18:46:02.296237       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0906 18:46:02.296275       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0906 18:46:02.296285       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0906 18:46:02.296288       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0906 18:46:02.296335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-152000"
	I0906 18:46:02.302120       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0906 18:46:02.302165       1 shared_informer.go:320] Caches are synced for endpoint
	I0906 18:46:02.303276       1 shared_informer.go:320] Caches are synced for taint
	I0906 18:46:02.303343       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0906 18:46:02.303402       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-152000"
	I0906 18:46:02.303445       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0906 18:46:02.352417       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0906 18:46:02.357662       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 18:46:02.386832       1 shared_informer.go:320] Caches are synced for daemon sets
	I0906 18:46:02.387767       1 shared_informer.go:320] Caches are synced for stateful set
	I0906 18:46:02.404610       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 18:46:02.456880       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="254.03893ms"
	I0906 18:46:02.456921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="18.219µs"
	I0906 18:46:02.815670       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 18:46:02.902134       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 18:46:02.902174       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 18:46:11.105529       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="10.415919ms"
	I0906 18:46:11.106618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="43.731µs"
	
	
	==> kube-proxy [28af283a708d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:45:59.682052       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:45:59.689546       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0906 18:45:59.689580       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:45:59.700164       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:45:59.700183       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:45:59.700199       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:45:59.700944       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:45:59.701024       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:45:59.701041       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:45:59.701472       1 config.go:197] "Starting service config controller"
	I0906 18:45:59.701481       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:45:59.701490       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:45:59.701492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:45:59.701684       1 config.go:326] "Starting node config controller"
	I0906 18:45:59.701686       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:45:59.802152       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:45:59.802162       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:45:59.802170       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [7177fef22eef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:46:46.052354       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:46:46.055610       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0906 18:46:46.055634       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:46:46.063161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:46:46.063177       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:46:46.063230       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:46:46.063796       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:46:46.063922       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:46:46.063930       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:46:46.064368       1 config.go:197] "Starting service config controller"
	I0906 18:46:46.064379       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:46:46.064388       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:46:46.064405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:46:46.064606       1 config.go:326] "Starting node config controller"
	I0906 18:46:46.064609       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:46:46.165439       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:46:46.165461       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:46:46.165473       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15cc40d93868] <==
	I0906 18:45:57.250642       1 serving.go:386] Generated self-signed cert in-memory
	W0906 18:45:58.841744       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 18:45:58.841826       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 18:45:58.841863       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 18:45:58.841881       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 18:45:58.875642       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 18:45:58.875922       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:45:58.876879       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 18:45:58.877011       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 18:45:58.877873       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 18:45:58.877023       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 18:45:58.979152       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 18:46:27.365003       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 18:46:27.365152       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4bbf8f9d0b9c] <==
	I0906 18:46:42.753979       1 serving.go:386] Generated self-signed cert in-memory
	W0906 18:46:44.591406       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 18:46:44.591479       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 18:46:44.591493       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 18:46:44.591501       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 18:46:44.617153       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 18:46:44.617174       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:46:44.623851       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 18:46:44.624016       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 18:46:44.624036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 18:46:44.624981       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 18:46:44.728458       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:47:40 functional-152000 kubelet[6261]: I0906 18:47:40.541147    6261 scope.go:117] "RemoveContainer" containerID="f461d88ef6edff37335b78ac3351a3d355495eddde7ee4c9605eb483a591ceb0"
	Sep 06 18:47:41 functional-152000 kubelet[6261]: I0906 18:47:41.403449    6261 scope.go:117] "RemoveContainer" containerID="f461d88ef6edff37335b78ac3351a3d355495eddde7ee4c9605eb483a591ceb0"
	Sep 06 18:47:41 functional-152000 kubelet[6261]: I0906 18:47:41.403954    6261 scope.go:117] "RemoveContainer" containerID="9a5b516e22a4724ff292bde504d3aaa9232695b0b953f683a9c223906813535c"
	Sep 06 18:47:41 functional-152000 kubelet[6261]: E0906 18:47:41.405102    6261 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-lpwm6_default(464db546-6f59-437a-a87a-f29fda80c538)\"" pod="default/hello-node-connect-65d86f57f4-lpwm6" podUID="464db546-6f59-437a-a87a-f29fda80c538"
	Sep 06 18:47:41 functional-152000 kubelet[6261]: E0906 18:47:41.545411    6261 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 18:47:41 functional-152000 kubelet[6261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:47:41 functional-152000 kubelet[6261]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:47:41 functional-152000 kubelet[6261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:47:41 functional-152000 kubelet[6261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:47:41 functional-152000 kubelet[6261]: I0906 18:47:41.547120    6261 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a09db55-0de3-40bb-af41-3ed24c80b94e" path="/var/lib/kubelet/pods/6a09db55-0de3-40bb-af41-3ed24c80b94e/volumes"
	Sep 06 18:47:41 functional-152000 kubelet[6261]: I0906 18:47:41.623382    6261 scope.go:117] "RemoveContainer" containerID="d45cf0befd3207be49aa909d8875c268a5e0ab1c46986046a9b3fdc29dec5f67"
	Sep 06 18:47:43 functional-152000 kubelet[6261]: I0906 18:47:43.543977    6261 scope.go:117] "RemoveContainer" containerID="3217ee2c51c7b5f2dda5a7ad95e0b5c8bcaa79e4ad6738b1d6dcdb0cb0845015"
	Sep 06 18:47:43 functional-152000 kubelet[6261]: E0906 18:47:43.544349    6261 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-w6kkx_default(04de1b85-2273-4a34-a51b-b995aebd4714)\"" pod="default/hello-node-64b4f8f9ff-w6kkx" podUID="04de1b85-2273-4a34-a51b-b995aebd4714"
	Sep 06 18:47:43 functional-152000 kubelet[6261]: I0906 18:47:43.571231    6261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.8423833910000003 podStartE2EDuration="4.571203081s" podCreationTimestamp="2024-09-06 18:47:39 +0000 UTC" firstStartedPulling="2024-09-06 18:47:40.145237501 +0000 UTC m=+58.676297147" lastFinishedPulling="2024-09-06 18:47:40.874057191 +0000 UTC m=+59.405116837" observedRunningTime="2024-09-06 18:47:41.428237407 +0000 UTC m=+59.959297053" watchObservedRunningTime="2024-09-06 18:47:43.571203081 +0000 UTC m=+62.102262727"
	Sep 06 18:47:48 functional-152000 kubelet[6261]: I0906 18:47:48.718983    6261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/45982d61-fa7d-4690-9abf-5f364144eb3d-test-volume\") pod \"busybox-mount\" (UID: \"45982d61-fa7d-4690-9abf-5f364144eb3d\") " pod="default/busybox-mount"
	Sep 06 18:47:48 functional-152000 kubelet[6261]: I0906 18:47:48.719010    6261 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw828\" (UniqueName: \"kubernetes.io/projected/45982d61-fa7d-4690-9abf-5f364144eb3d-kube-api-access-kw828\") pod \"busybox-mount\" (UID: \"45982d61-fa7d-4690-9abf-5f364144eb3d\") " pod="default/busybox-mount"
	Sep 06 18:47:51 functional-152000 kubelet[6261]: I0906 18:47:51.546418    6261 scope.go:117] "RemoveContainer" containerID="9a5b516e22a4724ff292bde504d3aaa9232695b0b953f683a9c223906813535c"
	Sep 06 18:47:51 functional-152000 kubelet[6261]: E0906 18:47:51.547960    6261 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-lpwm6_default(464db546-6f59-437a-a87a-f29fda80c538)\"" pod="default/hello-node-connect-65d86f57f4-lpwm6" podUID="464db546-6f59-437a-a87a-f29fda80c538"
	Sep 06 18:47:51 functional-152000 kubelet[6261]: I0906 18:47:51.743969    6261 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/45982d61-fa7d-4690-9abf-5f364144eb3d-test-volume\") pod \"45982d61-fa7d-4690-9abf-5f364144eb3d\" (UID: \"45982d61-fa7d-4690-9abf-5f364144eb3d\") "
	Sep 06 18:47:51 functional-152000 kubelet[6261]: I0906 18:47:51.744005    6261 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw828\" (UniqueName: \"kubernetes.io/projected/45982d61-fa7d-4690-9abf-5f364144eb3d-kube-api-access-kw828\") pod \"45982d61-fa7d-4690-9abf-5f364144eb3d\" (UID: \"45982d61-fa7d-4690-9abf-5f364144eb3d\") "
	Sep 06 18:47:51 functional-152000 kubelet[6261]: I0906 18:47:51.744072    6261 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45982d61-fa7d-4690-9abf-5f364144eb3d-test-volume" (OuterVolumeSpecName: "test-volume") pod "45982d61-fa7d-4690-9abf-5f364144eb3d" (UID: "45982d61-fa7d-4690-9abf-5f364144eb3d"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 18:47:51 functional-152000 kubelet[6261]: I0906 18:47:51.746668    6261 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45982d61-fa7d-4690-9abf-5f364144eb3d-kube-api-access-kw828" (OuterVolumeSpecName: "kube-api-access-kw828") pod "45982d61-fa7d-4690-9abf-5f364144eb3d" (UID: "45982d61-fa7d-4690-9abf-5f364144eb3d"). InnerVolumeSpecName "kube-api-access-kw828". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:47:51 functional-152000 kubelet[6261]: I0906 18:47:51.844111    6261 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kw828\" (UniqueName: \"kubernetes.io/projected/45982d61-fa7d-4690-9abf-5f364144eb3d-kube-api-access-kw828\") on node \"functional-152000\" DevicePath \"\""
	Sep 06 18:47:51 functional-152000 kubelet[6261]: I0906 18:47:51.844129    6261 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/45982d61-fa7d-4690-9abf-5f364144eb3d-test-volume\") on node \"functional-152000\" DevicePath \"\""
	Sep 06 18:47:52 functional-152000 kubelet[6261]: I0906 18:47:52.600746    6261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c6797b1ebe043628fdf7e835e15e633bf4721494ea78755a097485afe3be21"
	
	
	==> storage-provisioner [5c51321ca997] <==
	I0906 18:46:13.250015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:46:13.253806       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:46:13.253823       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [952bddde03c9] <==
	I0906 18:46:46.035333       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:46:46.041133       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:46:46.041152       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:47:03.443325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:47:03.443397       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-152000_9e659afe-50f7-47f7-8071-b2af94ec7649!
	I0906 18:47:03.443423       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99bbf44a-3bdb-411f-b1e8-502f917b237d", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-152000_9e659afe-50f7-47f7-8071-b2af94ec7649 became leader
	I0906 18:47:03.543755       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-152000_9e659afe-50f7-47f7-8071-b2af94ec7649!
	I0906 18:47:25.923526       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0906 18:47:25.923670       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    258e3dc5-e97c-4b9d-9571-7085990192f6 327 0 2024-09-06 18:45:32 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-06 18:45:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-77e04ae2-8faf-4eb4-a806-230d02423eae &PersistentVolumeClaim{ObjectMeta:{myclaim  default  77e04ae2-8faf-4eb4-a806-230d02423eae 732 0 2024-09-06 18:47:25 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-06 18:47:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-06 18:47:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0906 18:47:25.925007       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-77e04ae2-8faf-4eb4-a806-230d02423eae" provisioned
	I0906 18:47:25.925071       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0906 18:47:25.925096       1 volume_store.go:212] Trying to save persistentvolume "pvc-77e04ae2-8faf-4eb4-a806-230d02423eae"
	I0906 18:47:25.924353       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"77e04ae2-8faf-4eb4-a806-230d02423eae", APIVersion:"v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0906 18:47:25.930234       1 volume_store.go:219] persistentvolume "pvc-77e04ae2-8faf-4eb4-a806-230d02423eae" saved
	I0906 18:47:25.931016       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"77e04ae2-8faf-4eb4-a806-230d02423eae", APIVersion:"v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-77e04ae2-8faf-4eb4-a806-230d02423eae
	

-- /stdout --
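
Two environment problems recur throughout the cluster logs above and are worth separating from the test failures themselves: kube-proxy cannot clean up nftables rules ("Operation not supported" when adding the ip6 kube-proxy table), and the kubelet's iptables canary fails because ip6tables cannot initialize the nat table. Both point at the Buildroot guest kernel lacking IPv6 NAT support rather than at the code under test. A minimal way to confirm this from the host, assuming the functional-152000 profile is still up:

	# Reproduce the kubelet canary failure directly in the guest.
	$ out/minikube-darwin-arm64 -p functional-152000 ssh "sudo ip6tables -t nat -L"
	# Check whether an ip6 NAT module is present at all.
	$ out/minikube-darwin-arm64 -p functional-152000 ssh "lsmod | grep -i ip6table"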
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-152000 -n functional-152000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-152000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-152000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-152000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-152000/192.168.105.4
	Start Time:       Fri, 06 Sep 2024 11:47:48 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://a3fa6fda52aa481f6da9f74cd8fc779e0b100865b556c5585b126107ad0c2208
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 06 Sep 2024 11:47:50 -0700
	      Finished:     Fri, 06 Sep 2024 11:47:50 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kw828 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kw828:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/busybox-mount to functional-152000
	  Normal  Pulling    4s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.363s (1.363s including waiting). Image size: 3547125 bytes.
	  Normal  Created    2s    kubelet            Created container mount-munger
	  Normal  Started    2s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.34s)
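
The kubelet entries in the cluster log above point at the root cause: the echoserver-arm containers behind hello-node and hello-node-connect are in CrashLoopBackOff, so the connect check never reaches a serving endpoint. When triaging a run like this by hand, the crashed container's previous output is usually the quickest signal; a sketch, assuming the default app=<name> label that kubectl create deployment applies:

	# Output of the last failed echoserver-arm attempt.
	$ kubectl --context functional-152000 logs -l app=hello-node-connect --previous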

TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-darwin-arm64 license: exit status 40 (147.219833ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.15s)
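
Exit status 40 corresponds here to the INET_LICENSES reason shown in the stderr above: the license bundle download came back 404, an upstream/network problem rather than a defect in the binary under test. Rerunning with minikube's standard verbose-logging flags would show exactly which URL was requested (a sketch):

	$ out/minikube-darwin-arm64 license --alsologtostderr -v=8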

TestMultiControlPlane/serial/StopSecondaryNode (312.31s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 node stop m02 -v=7 --alsologtostderr
E0906 11:52:09.141351    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:09.148969    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:09.162316    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:09.185675    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:09.229019    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:09.311079    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:09.474541    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:09.797229    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:10.440641    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:11.723656    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:14.287053    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-001000 node stop m02 -v=7 --alsologtostderr: (12.192282s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr
E0906 11:52:19.410544    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:22.273859    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:29.654027    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:49.994139    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:52:50.137257    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:53:31.100229    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:54:53.022765    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr: exit status 7 (3m45.051362041s)

-- stdout --
	ha-001000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-001000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-001000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0906 11:52:18.225988    4710 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:52:18.226134    4710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:52:18.226138    4710 out.go:358] Setting ErrFile to fd 2...
	I0906 11:52:18.226140    4710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:52:18.226275    4710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 11:52:18.226396    4710 out.go:352] Setting JSON to false
	I0906 11:52:18.226409    4710 mustload.go:65] Loading cluster: ha-001000
	I0906 11:52:18.226442    4710 notify.go:220] Checking for updates...
	I0906 11:52:18.226631    4710 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:52:18.226639    4710 status.go:255] checking status of ha-001000 ...
	I0906 11:52:18.227293    4710 status.go:330] ha-001000 host status = "Running" (err=<nil>)
	I0906 11:52:18.227300    4710 host.go:66] Checking if "ha-001000" exists ...
	I0906 11:52:18.227392    4710 host.go:66] Checking if "ha-001000" exists ...
	I0906 11:52:18.227509    4710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 11:52:18.227516    4710 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/id_rsa Username:docker}
	W0906 11:53:33.228725    4710 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0906 11:53:33.228800    4710 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0906 11:53:33.228810    4710 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0906 11:53:33.228828    4710 status.go:257] ha-001000 status: &{Name:ha-001000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 11:53:33.228838    4710 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0906 11:53:33.228843    4710 status.go:255] checking status of ha-001000-m02 ...
	I0906 11:53:33.229056    4710 status.go:330] ha-001000-m02 host status = "Stopped" (err=<nil>)
	I0906 11:53:33.229062    4710 status.go:343] host is not running, skipping remaining checks
	I0906 11:53:33.229065    4710 status.go:257] ha-001000-m02 status: &{Name:ha-001000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 11:53:33.229070    4710 status.go:255] checking status of ha-001000-m03 ...
	I0906 11:53:33.229817    4710 status.go:330] ha-001000-m03 host status = "Running" (err=<nil>)
	I0906 11:53:33.229823    4710 host.go:66] Checking if "ha-001000-m03" exists ...
	I0906 11:53:33.229929    4710 host.go:66] Checking if "ha-001000-m03" exists ...
	I0906 11:53:33.231734    4710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 11:53:33.231743    4710 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m03/id_rsa Username:docker}
	W0906 11:54:48.232250    4710 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0906 11:54:48.232297    4710 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0906 11:54:48.232306    4710 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0906 11:54:48.232310    4710 status.go:257] ha-001000-m03 status: &{Name:ha-001000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 11:54:48.232320    4710 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0906 11:54:48.232324    4710 status.go:255] checking status of ha-001000-m04 ...
	I0906 11:54:48.232979    4710 status.go:330] ha-001000-m04 host status = "Running" (err=<nil>)
	I0906 11:54:48.232987    4710 host.go:66] Checking if "ha-001000-m04" exists ...
	I0906 11:54:48.233107    4710 host.go:66] Checking if "ha-001000-m04" exists ...
	I0906 11:54:48.233218    4710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 11:54:48.233224    4710 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m04/id_rsa Username:docker}
	W0906 11:56:03.234964    4710 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0906 11:56:03.235118    4710 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0906 11:56:03.235145    4710 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0906 11:56:03.235160    4710 status.go:257] ha-001000-m04 status: &{Name:ha-001000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0906 11:56:03.235200    4710 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr": ha-001000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-001000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-001000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-001000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr": ha-001000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-001000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-001000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-001000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr": ha-001000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-001000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-001000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-001000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000
E0906 11:57:09.137586    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000: exit status 3 (1m15.068531334s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0906 11:57:18.302555    4719 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0906 11:57:18.302594    4719 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-001000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.31s)
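The 3m45s duration of the status command above is three consecutive SSH dials, each hung until the macOS TCP connect timeout (~75s) against port 22 of 192.168.105.5, .7, and .8. The same reachability check can be run with a short explicit timeout; a minimal Go sketch, where the 5-second timeout and the hard-coded host list are illustrative assumptions, not part of the test suite:

	// probe_ssh.go - dial the SSH ports that the status checks above hang on,
	// failing fast instead of waiting out the OS default connect timeout.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VM addresses taken from the log above.
		hosts := []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"}
		for _, h := range hosts {
			conn, err := net.DialTimeout("tcp", h, 5*time.Second)
			if err != nil {
				// Mirrors the "dial tcp ...:22: connect: operation timed out" errors above.
				fmt.Printf("%s unreachable: %v\n", h, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s reachable\n", h)
		}
	}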

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0906 11:57:22.268919    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:57:36.864124    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.101675542s)
ha_test.go:413: expected profile "ha-001000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-001000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-001000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-001000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000: exit status 3 (1m15.042406417s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0906 12:01:03.440240    4740 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0906 12:01:03.440288    4740 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-001000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)
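The assertion at ha_test.go:413 inspects only the Status field of the matching profile in the `profile list --output json` payload quoted above. A minimal Go sketch of that check, relying on nothing beyond the "valid", "Name", and "Status" fields visible in the logged JSON; the abbreviated raw payload is a stand-in for the real command output:

	// profile_status.go - decode the minikube profile list JSON and check
	// the profile status, as the failed assertion above does.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only the fields the check needs; everything else in the payload is ignored.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated stand-in for: out/minikube-darwin-arm64 profile list --output json
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-001000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-001000" && p.Status != "Degraded" {
				fmt.Printf("expected %q to have Degraded status, got %q\n", p.Name, p.Status)
			}
		}
	}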

TestMultiControlPlane/serial/RestartSecondaryNode (305.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-001000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.134981375s)

-- stdout --
	* Starting "ha-001000-m02" control-plane node in "ha-001000" cluster
	* Restarting existing qemu2 VM for "ha-001000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-001000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:01:03.515240    5048 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:01:03.515690    5048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:01:03.515695    5048 out.go:358] Setting ErrFile to fd 2...
	I0906 12:01:03.515699    5048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:01:03.515862    5048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:01:03.516116    5048 mustload.go:65] Loading cluster: ha-001000
	I0906 12:01:03.516385    5048 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0906 12:01:03.516645    5048 host.go:58] "ha-001000-m02" host status: Stopped
	I0906 12:01:03.520164    5048 out.go:177] * Starting "ha-001000-m02" control-plane node in "ha-001000" cluster
	I0906 12:01:03.524185    5048 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:01:03.524199    5048 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:01:03.524208    5048 cache.go:56] Caching tarball of preloaded images
	I0906 12:01:03.524296    5048 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:01:03.524303    5048 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:01:03.524367    5048 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/ha-001000/config.json ...
	I0906 12:01:03.525032    5048 start.go:360] acquireMachinesLock for ha-001000-m02: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:01:03.525085    5048 start.go:364] duration metric: took 37.625µs to acquireMachinesLock for "ha-001000-m02"
	I0906 12:01:03.525097    5048 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:01:03.525102    5048 fix.go:54] fixHost starting: m02
	I0906 12:01:03.525263    5048 fix.go:112] recreateIfNeeded on ha-001000-m02: state=Stopped err=<nil>
	W0906 12:01:03.525270    5048 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:01:03.529144    5048 out.go:177] * Restarting existing qemu2 VM for "ha-001000-m02" ...
	I0906 12:01:03.533120    5048 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:01:03.533158    5048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:ba:30:71:f8:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/disk.qcow2
	I0906 12:01:03.535461    5048 main.go:141] libmachine: STDOUT: 
	I0906 12:01:03.535479    5048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:01:03.535501    5048 fix.go:56] duration metric: took 10.403334ms for fixHost
	I0906 12:01:03.535506    5048 start.go:83] releasing machines lock for "ha-001000-m02", held for 10.418208ms
	W0906 12:01:03.535511    5048 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:01:03.535542    5048 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:01:03.535546    5048 start.go:729] Will try again in 5 seconds ...
	I0906 12:01:08.536639    5048 start.go:360] acquireMachinesLock for ha-001000-m02: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:01:08.537046    5048 start.go:364] duration metric: took 346µs to acquireMachinesLock for "ha-001000-m02"
	I0906 12:01:08.537189    5048 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:01:08.537207    5048 fix.go:54] fixHost starting: m02
	I0906 12:01:08.537939    5048 fix.go:112] recreateIfNeeded on ha-001000-m02: state=Stopped err=<nil>
	W0906 12:01:08.537958    5048 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:01:08.541844    5048 out.go:177] * Restarting existing qemu2 VM for "ha-001000-m02" ...
	I0906 12:01:08.545897    5048 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:01:08.546092    5048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:ba:30:71:f8:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/disk.qcow2
	I0906 12:01:08.554284    5048 main.go:141] libmachine: STDOUT: 
	I0906 12:01:08.554340    5048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:01:08.554406    5048 fix.go:56] duration metric: took 17.204959ms for fixHost
	I0906 12:01:08.554428    5048 start.go:83] releasing machines lock for "ha-001000-m02", held for 17.36475ms
	W0906 12:01:08.554656    5048 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-001000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-001000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:01:08.558667    5048 out.go:201] 
	W0906 12:01:08.562905    5048 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:01:08.562928    5048 out.go:270] * 
	* 
	W0906 12:01:08.569345    5048 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:01:08.573938    5048 out.go:201] 

** /stderr **
ha_test.go:422: I0906 12:01:03.515240    5048 out.go:345] Setting OutFile to fd 1 ...
I0906 12:01:03.515690    5048 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 12:01:03.515695    5048 out.go:358] Setting ErrFile to fd 2...
I0906 12:01:03.515699    5048 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 12:01:03.515862    5048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
I0906 12:01:03.516116    5048 mustload.go:65] Loading cluster: ha-001000
I0906 12:01:03.516385    5048 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0906 12:01:03.516645    5048 host.go:58] "ha-001000-m02" host status: Stopped
I0906 12:01:03.520164    5048 out.go:177] * Starting "ha-001000-m02" control-plane node in "ha-001000" cluster
I0906 12:01:03.524185    5048 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0906 12:01:03.524199    5048 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0906 12:01:03.524208    5048 cache.go:56] Caching tarball of preloaded images
I0906 12:01:03.524296    5048 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0906 12:01:03.524303    5048 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0906 12:01:03.524367    5048 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/ha-001000/config.json ...
I0906 12:01:03.525032    5048 start.go:360] acquireMachinesLock for ha-001000-m02: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0906 12:01:03.525085    5048 start.go:364] duration metric: took 37.625µs to acquireMachinesLock for "ha-001000-m02"
I0906 12:01:03.525097    5048 start.go:96] Skipping create...Using existing machine configuration
I0906 12:01:03.525102    5048 fix.go:54] fixHost starting: m02
I0906 12:01:03.525263    5048 fix.go:112] recreateIfNeeded on ha-001000-m02: state=Stopped err=<nil>
W0906 12:01:03.525270    5048 fix.go:138] unexpected machine state, will restart: <nil>
I0906 12:01:03.529144    5048 out.go:177] * Restarting existing qemu2 VM for "ha-001000-m02" ...
I0906 12:01:03.533120    5048 qemu.go:418] Using hvf for hardware acceleration
I0906 12:01:03.533158    5048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:ba:30:71:f8:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/disk.qcow2
I0906 12:01:03.535461    5048 main.go:141] libmachine: STDOUT: 
I0906 12:01:03.535479    5048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0906 12:01:03.535501    5048 fix.go:56] duration metric: took 10.403334ms for fixHost
I0906 12:01:03.535506    5048 start.go:83] releasing machines lock for "ha-001000-m02", held for 10.418208ms
W0906 12:01:03.535511    5048 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0906 12:01:03.535542    5048 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0906 12:01:03.535546    5048 start.go:729] Will try again in 5 seconds ...
I0906 12:01:08.536639    5048 start.go:360] acquireMachinesLock for ha-001000-m02: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0906 12:01:08.537046    5048 start.go:364] duration metric: took 346µs to acquireMachinesLock for "ha-001000-m02"
I0906 12:01:08.537189    5048 start.go:96] Skipping create...Using existing machine configuration
I0906 12:01:08.537207    5048 fix.go:54] fixHost starting: m02
I0906 12:01:08.537939    5048 fix.go:112] recreateIfNeeded on ha-001000-m02: state=Stopped err=<nil>
W0906 12:01:08.537958    5048 fix.go:138] unexpected machine state, will restart: <nil>
I0906 12:01:08.541844    5048 out.go:177] * Restarting existing qemu2 VM for "ha-001000-m02" ...
I0906 12:01:08.545897    5048 qemu.go:418] Using hvf for hardware acceleration
I0906 12:01:08.546092    5048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:ba:30:71:f8:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/disk.qcow2
I0906 12:01:08.554284    5048 main.go:141] libmachine: STDOUT: 
I0906 12:01:08.554340    5048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0906 12:01:08.554406    5048 fix.go:56] duration metric: took 17.204959ms for fixHost
I0906 12:01:08.554428    5048 start.go:83] releasing machines lock for "ha-001000-m02", held for 17.36475ms
W0906 12:01:08.554656    5048 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-001000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-001000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0906 12:01:08.558667    5048 out.go:201] 
W0906 12:01:08.562905    5048 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0906 12:01:08.562928    5048 out.go:270] * 
* 
W0906 12:01:08.569345    5048 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0906 12:01:08.573938    5048 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-001000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr
E0906 12:02:09.125286    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:02:22.257812    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:03:45.339112    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr: exit status 7 (3m45.06860625s)

-- stdout --
	ha-001000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-001000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-001000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0906 12:01:08.636509    5052 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:01:08.636687    5052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:01:08.636692    5052 out.go:358] Setting ErrFile to fd 2...
	I0906 12:01:08.636695    5052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:01:08.636841    5052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:01:08.636983    5052 out.go:352] Setting JSON to false
	I0906 12:01:08.636997    5052 mustload.go:65] Loading cluster: ha-001000
	I0906 12:01:08.637041    5052 notify.go:220] Checking for updates...
	I0906 12:01:08.637258    5052 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:01:08.637266    5052 status.go:255] checking status of ha-001000 ...
	I0906 12:01:08.638125    5052 status.go:330] ha-001000 host status = "Running" (err=<nil>)
	I0906 12:01:08.638135    5052 host.go:66] Checking if "ha-001000" exists ...
	I0906 12:01:08.638286    5052 host.go:66] Checking if "ha-001000" exists ...
	I0906 12:01:08.638423    5052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:01:08.638432    5052 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/id_rsa Username:docker}
	W0906 12:02:23.636276    5052 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0906 12:02:23.636334    5052 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0906 12:02:23.636343    5052 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0906 12:02:23.636347    5052 status.go:257] ha-001000 status: &{Name:ha-001000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 12:02:23.636355    5052 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0906 12:02:23.636359    5052 status.go:255] checking status of ha-001000-m02 ...
	I0906 12:02:23.636544    5052 status.go:330] ha-001000-m02 host status = "Stopped" (err=<nil>)
	I0906 12:02:23.636549    5052 status.go:343] host is not running, skipping remaining checks
	I0906 12:02:23.636551    5052 status.go:257] ha-001000-m02 status: &{Name:ha-001000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:02:23.636555    5052 status.go:255] checking status of ha-001000-m03 ...
	I0906 12:02:23.637110    5052 status.go:330] ha-001000-m03 host status = "Running" (err=<nil>)
	I0906 12:02:23.637122    5052 host.go:66] Checking if "ha-001000-m03" exists ...
	I0906 12:02:23.637226    5052 host.go:66] Checking if "ha-001000-m03" exists ...
	I0906 12:02:23.637337    5052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:02:23.637343    5052 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m03/id_rsa Username:docker}
	W0906 12:03:38.638101    5052 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0906 12:03:38.638249    5052 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0906 12:03:38.638280    5052 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0906 12:03:38.638294    5052 status.go:257] ha-001000-m03 status: &{Name:ha-001000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 12:03:38.638324    5052 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0906 12:03:38.638342    5052 status.go:255] checking status of ha-001000-m04 ...
	I0906 12:03:38.640315    5052 status.go:330] ha-001000-m04 host status = "Running" (err=<nil>)
	I0906 12:03:38.640335    5052 host.go:66] Checking if "ha-001000-m04" exists ...
	I0906 12:03:38.640635    5052 host.go:66] Checking if "ha-001000-m04" exists ...
	I0906 12:03:38.641002    5052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:03:38.641025    5052 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m04/id_rsa Username:docker}
	W0906 12:04:53.641850    5052 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0906 12:04:53.641904    5052 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0906 12:04:53.641914    5052 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0906 12:04:53.641918    5052 status.go:257] ha-001000-m04 status: &{Name:ha-001000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0906 12:04:53.641928    5052 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000: exit status 3 (1m15.041424334s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0906 12:06:08.680718    5065 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0906 12:06:08.680726    5065 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-001000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.25s)
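Every VM restart in this group fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`: qemu-system-aarch64 is launched through socket_vmnet_client, but nothing is listening on the socket. That precondition can be probed directly with a unix-socket dial; a minimal Go sketch, assuming the SocketVMnetPath recorded in the profile config above (the 2-second timeout is an arbitrary illustrative choice):

	// vmnet_probe.go - check whether the socket_vmnet daemon is accepting
	// connections on the socket that socket_vmnet_client uses above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the qemu start failures above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}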

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-001000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-001000 -v=7 --alsologtostderr
E0906 12:12:09.116585    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:12:22.248998    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-001000 -v=7 --alsologtostderr: (5m27.174782792s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-001000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-001000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.224415208s)

-- stdout --
	* [ha-001000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-001000" primary control-plane node in "ha-001000" cluster
	* Restarting existing qemu2 VM for "ha-001000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-001000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:14:06.057024    5129 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:14:06.057242    5129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:06.057246    5129 out.go:358] Setting ErrFile to fd 2...
	I0906 12:14:06.057249    5129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:06.057423    5129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:14:06.058810    5129 out.go:352] Setting JSON to false
	I0906 12:14:06.079604    5129 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4416,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:14:06.079673    5129 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:14:06.085051    5129 out.go:177] * [ha-001000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:14:06.093064    5129 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:14:06.093111    5129 notify.go:220] Checking for updates...
	I0906 12:14:06.101009    5129 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:14:06.103918    5129 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:14:06.106953    5129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:14:06.109986    5129 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:14:06.112881    5129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:14:06.116347    5129 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:14:06.116398    5129 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:14:06.120950    5129 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:14:06.127996    5129 start.go:297] selected driver: qemu2
	I0906 12:14:06.128003    5129 start.go:901] validating driver "qemu2" against &{Name:ha-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-001000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:14:06.128092    5129 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:14:06.130999    5129 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:14:06.131025    5129 cni.go:84] Creating CNI manager for ""
	I0906 12:14:06.131031    5129 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:14:06.131085    5129 start.go:340] cluster config:
	{Name:ha-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-001000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:14:06.135678    5129 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:14:06.143939    5129 out.go:177] * Starting "ha-001000" primary control-plane node in "ha-001000" cluster
	I0906 12:14:06.147992    5129 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:14:06.148013    5129 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:14:06.148022    5129 cache.go:56] Caching tarball of preloaded images
	I0906 12:14:06.148083    5129 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:14:06.148088    5129 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:14:06.148157    5129 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/ha-001000/config.json ...
	I0906 12:14:06.148603    5129 start.go:360] acquireMachinesLock for ha-001000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:14:06.148638    5129 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "ha-001000"
	I0906 12:14:06.148649    5129 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:14:06.148656    5129 fix.go:54] fixHost starting: 
	I0906 12:14:06.148781    5129 fix.go:112] recreateIfNeeded on ha-001000: state=Stopped err=<nil>
	W0906 12:14:06.148789    5129 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:14:06.152955    5129 out.go:177] * Restarting existing qemu2 VM for "ha-001000" ...
	I0906 12:14:06.160825    5129 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:14:06.160870    5129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:68:d8:0d:8e:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/disk.qcow2
	I0906 12:14:06.163138    5129 main.go:141] libmachine: STDOUT: 
	I0906 12:14:06.163163    5129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:14:06.163191    5129 fix.go:56] duration metric: took 14.537166ms for fixHost
	I0906 12:14:06.163197    5129 start.go:83] releasing machines lock for "ha-001000", held for 14.554125ms
	W0906 12:14:06.163202    5129 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:14:06.163232    5129 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:14:06.163237    5129 start.go:729] Will try again in 5 seconds ...
	I0906 12:14:11.165343    5129 start.go:360] acquireMachinesLock for ha-001000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:14:11.165770    5129 start.go:364] duration metric: took 345.708µs to acquireMachinesLock for "ha-001000"
	I0906 12:14:11.165905    5129 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:14:11.165923    5129 fix.go:54] fixHost starting: 
	I0906 12:14:11.166618    5129 fix.go:112] recreateIfNeeded on ha-001000: state=Stopped err=<nil>
	W0906 12:14:11.166642    5129 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:14:11.170207    5129 out.go:177] * Restarting existing qemu2 VM for "ha-001000" ...
	I0906 12:14:11.178211    5129 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:14:11.178372    5129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:68:d8:0d:8e:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/disk.qcow2
	I0906 12:14:11.186061    5129 main.go:141] libmachine: STDOUT: 
	I0906 12:14:11.186160    5129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:14:11.186220    5129 fix.go:56] duration metric: took 20.298125ms for fixHost
	I0906 12:14:11.186241    5129 start.go:83] releasing machines lock for "ha-001000", held for 20.447417ms
	W0906 12:14:11.186451    5129 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-001000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-001000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:14:11.194120    5129 out.go:201] 
	W0906 12:14:11.198057    5129 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:14:11.198090    5129 out.go:270] * 
	* 
	W0906 12:14:11.200098    5129 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:14:11.208077    5129 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-001000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-001000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000: exit status 7 (32.961291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-001000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)
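
Every restart attempt above dies at the same step: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet, so no VM ever comes back up. A minimal Go probe of that socket (a diagnostic sketch for the CI host, not part of the test suite; the socket path is copied from the log) reproduces the failing connection:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket path the qemu2 driver hands to socket_vmnet_client.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With the socket_vmnet daemon down, this prints
			// "connect: connection refused", matching the log above.
			fmt.Println("probe failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}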

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-001000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.472125ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-001000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-001000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:14:11.343327    5144 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:14:11.343607    5144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:11.343610    5144 out.go:358] Setting ErrFile to fd 2...
	I0906 12:14:11.343613    5144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:11.343732    5144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:14:11.343934    5144 mustload.go:65] Loading cluster: ha-001000
	I0906 12:14:11.344166    5144 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0906 12:14:11.344463    5144 out.go:270] ! The control-plane node ha-001000 host is not running (will try others): state=Stopped
	! The control-plane node ha-001000 host is not running (will try others): state=Stopped
	W0906 12:14:11.344571    5144 out.go:270] ! The control-plane node ha-001000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-001000-m02 host is not running (will try others): state=Stopped
	I0906 12:14:11.348755    5144 out.go:177] * The control-plane node ha-001000-m03 host is not running: state=Stopped
	I0906 12:14:11.352683    5144 out.go:177]   To start a cluster, run: "minikube start -p ha-001000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-001000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr: exit status 7 (30.273958ms)

                                                
                                                
-- stdout --
	ha-001000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:14:11.382817    5146 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:14:11.382950    5146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:11.382954    5146 out.go:358] Setting ErrFile to fd 2...
	I0906 12:14:11.382956    5146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:11.383077    5146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:14:11.383189    5146 out.go:352] Setting JSON to false
	I0906 12:14:11.383199    5146 mustload.go:65] Loading cluster: ha-001000
	I0906 12:14:11.383266    5146 notify.go:220] Checking for updates...
	I0906 12:14:11.383422    5146 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:14:11.383429    5146 status.go:255] checking status of ha-001000 ...
	I0906 12:14:11.383645    5146 status.go:330] ha-001000 host status = "Stopped" (err=<nil>)
	I0906 12:14:11.383650    5146 status.go:343] host is not running, skipping remaining checks
	I0906 12:14:11.383652    5146 status.go:257] ha-001000 status: &{Name:ha-001000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:14:11.383662    5146 status.go:255] checking status of ha-001000-m02 ...
	I0906 12:14:11.383748    5146 status.go:330] ha-001000-m02 host status = "Stopped" (err=<nil>)
	I0906 12:14:11.383751    5146 status.go:343] host is not running, skipping remaining checks
	I0906 12:14:11.383753    5146 status.go:257] ha-001000-m02 status: &{Name:ha-001000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:14:11.383757    5146 status.go:255] checking status of ha-001000-m03 ...
	I0906 12:14:11.383849    5146 status.go:330] ha-001000-m03 host status = "Stopped" (err=<nil>)
	I0906 12:14:11.383851    5146 status.go:343] host is not running, skipping remaining checks
	I0906 12:14:11.383854    5146 status.go:257] ha-001000-m03 status: &{Name:ha-001000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:14:11.383858    5146 status.go:255] checking status of ha-001000-m04 ...
	I0906 12:14:11.383949    5146 status.go:330] ha-001000-m04 host status = "Stopped" (err=<nil>)
	I0906 12:14:11.383951    5146 status.go:343] host is not running, skipping remaining checks
	I0906 12:14:11.383953    5146 status.go:257] ha-001000-m04 status: &{Name:ha-001000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000: exit status 7 (30.56575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-001000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
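
The post-mortem helper drives "minikube status --format={{.Host}}", where the --format value is a standard Go text/template rendered against the node status. A trimmed sketch of that rendering (the nodeStatus struct here is a hypothetical stand-in limited to the fields visible in the log; minikube's real status type is larger):

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for the status fields shown in the log.
	type nodeStatus struct {
		Name string
		Host string
	}

	func main() {
		// The same template string the helper passes via --format.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Writes "Stopped", matching the post-mortem stdout above.
		_ = tmpl.Execute(os.Stdout, nodeStatus{Name: "ha-001000", Host: "Stopped"})
	}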

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-001000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-001000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-001000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-001000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000: exit status 7 (30.985375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-001000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
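
The assertion at ha_test.go:413 only looks at the Status field of each profile in the "profile list --output json" payload quoted above. A sketch of just enough structure to extract it (field names taken from the quoted JSON; the real Config schema is the much larger object in the log):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Minimal shape of the `profile list --output json` payload.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated from the payload in the log above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-001000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// Prints "ha-001000=Stopped"; the test expected "Degraded".
			fmt.Printf("%s=%s\n", p.Name, p.Status)
		}
	}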

                                                
                                    
TestMultiControlPlane/serial/StopCluster (231.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 stop -v=7 --alsologtostderr
E0906 12:17:09.142710    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:17:22.275576    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-001000 stop -v=7 --alsologtostderr: signal: killed (3m51.603769834s)

                                                
                                                
-- stdout --
	* Stopping node "ha-001000-m04"  ...
	* Stopping node "ha-001000-m03"  ...
	* Stopping node "ha-001000-m02"  ...
	* Stopping node "ha-001000"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:14:11.524106    5155 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:14:11.524235    5155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:11.524238    5155 out.go:358] Setting ErrFile to fd 2...
	I0906 12:14:11.524241    5155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:14:11.524370    5155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:14:11.524577    5155 out.go:352] Setting JSON to false
	I0906 12:14:11.524672    5155 mustload.go:65] Loading cluster: ha-001000
	I0906 12:14:11.524883    5155 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:14:11.524940    5155 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/ha-001000/config.json ...
	I0906 12:14:11.525374    5155 mustload.go:65] Loading cluster: ha-001000
	I0906 12:14:11.525460    5155 config.go:182] Loaded profile config "ha-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:14:11.525488    5155 stop.go:39] StopHost: ha-001000-m04
	I0906 12:14:11.528668    5155 out.go:177] * Stopping node "ha-001000-m04"  ...
	I0906 12:14:11.535681    5155 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 12:14:11.535720    5155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 12:14:11.535729    5155 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m04/id_rsa Username:docker}
	W0906 12:15:26.536514    5155 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0906 12:15:26.536893    5155 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0906 12:15:26.537060    5155 main.go:141] libmachine: Stopping "ha-001000-m04"...
	I0906 12:15:26.537234    5155 stop.go:66] stop err: Machine "ha-001000-m04" is already stopped.
	I0906 12:15:26.537263    5155 stop.go:69] host is already stopped
	I0906 12:15:26.537288    5155 stop.go:39] StopHost: ha-001000-m03
	I0906 12:15:26.541806    5155 out.go:177] * Stopping node "ha-001000-m03"  ...
	I0906 12:15:26.548696    5155 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 12:15:26.548867    5155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 12:15:26.548898    5155 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m03/id_rsa Username:docker}
	W0906 12:16:41.578633    5155 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0906 12:16:41.578858    5155 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0906 12:16:41.579009    5155 main.go:141] libmachine: Stopping "ha-001000-m03"...
	I0906 12:16:41.579157    5155 stop.go:66] stop err: Machine "ha-001000-m03" is already stopped.
	I0906 12:16:41.579183    5155 stop.go:69] host is already stopped
	I0906 12:16:41.579240    5155 stop.go:39] StopHost: ha-001000-m02
	I0906 12:16:41.587916    5155 out.go:177] * Stopping node "ha-001000-m02"  ...
	I0906 12:16:41.591850    5155 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 12:16:41.591995    5155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 12:16:41.592027    5155 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000-m02/id_rsa Username:docker}
	W0906 12:17:56.596503    5155 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.6:22: connect: operation timed out
	W0906 12:17:56.596700    5155 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.6:22: connect: operation timed out
	I0906 12:17:56.596763    5155 main.go:141] libmachine: Stopping "ha-001000-m02"...
	I0906 12:17:56.596927    5155 stop.go:66] stop err: Machine "ha-001000-m02" is already stopped.
	I0906 12:17:56.596956    5155 stop.go:69] host is already stopped
	I0906 12:17:56.596981    5155 stop.go:39] StopHost: ha-001000
	I0906 12:17:56.605322    5155 out.go:177] * Stopping node "ha-001000"  ...
	I0906 12:17:56.609286    5155 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 12:17:56.609436    5155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 12:17:56.609474    5155 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/ha-001000/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-001000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr: context deadline exceeded (2.375µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-001000 -n ha-001000: exit status 7 (73.205333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-001000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (231.68s)
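
The stop never reaches a running machine: for each node, the config-backup step dials <ip>:22 and waits out the operating system's TCP connect timeout (about 75 seconds per node in the stderr above) before moving on, so the command was still mid-dial on the primary node when the test deadline killed it at 3m51s. A sketch of the same reachability check with an explicit cap (addresses copied from the log; the 5-second timeout is an arbitrary illustrative choice):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SSH endpoints of the four nodes, from the stop log above.
		addrs := []string{
			"192.168.105.8:22", "192.168.105.7:22",
			"192.168.105.6:22", "192.168.105.5:22",
		}
		for _, addr := range addrs {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err != nil {
				// With the VMs never started, each dial fails just
				// like the sshutil retries in the stderr above.
				fmt.Println(addr, "unreachable:", err)
				continue
			}
			conn.Close()
			fmt.Println(addr, "reachable")
		}
	}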

                                                
                                    
TestImageBuild/serial/Setup (10.07s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-704000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-704000 --driver=qemu2 : exit status 80 (10.005132375s)

                                                
                                                
-- stdout --
	* [image-704000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-704000" primary control-plane node in "image-704000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-704000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-704000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-704000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-704000 -n image-704000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-704000 -n image-704000: exit status 7 (67.597084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-704000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.07s)

                                                
                                    
TestJSONOutput/start/Command (9.86s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-013000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-013000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.859813458s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1f0cf601-e3d1-4083-8cfc-12ba1a338457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-013000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a646d06-2bb4-4092-87e5-7d37a1c7a9c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"b538a7fa-f6ab-46ec-9de8-9fc466d17112","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig"}}
	{"specversion":"1.0","id":"d2f7eaf5-78f2-483e-9850-acc58e1d5a45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1216c1e8-6982-48a8-a4c0-5ff5689337a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"77167ef7-f5e2-4183-a4a3-cb6c48f5f62c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube"}}
	{"specversion":"1.0","id":"83a51fe1-eaf9-41f4-b503-4569de79981b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e7754e23-e7b8-478d-979e-fd6d3a058b8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"231d1224-ff3c-45bb-a915-294b6226e4b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4f7f4d80-aa6f-4257-b243-a0c802c93e81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-013000\" primary control-plane node in \"json-output-013000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"98a1c09e-24d3-4867-a9b0-7052e23a04fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f5a8f1b9-4c8f-49af-85b1-faa102890200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-013000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4144279-9e8b-4bdb-928e-06c7708568c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0dd1188e-6958-4437-9038-a826587f8134","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d125f461-4489-4e0f-8c82-ed4cc265fd41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-013000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"3c059a77-5b36-4833-b8b1-7661fdcc3691","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a1b5964d-e230-4cc8-a528-17116b0496ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-013000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.86s)
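
The parse failure at json_output_test.go:70 follows directly from the stdout above: the stream is supposed to be one CloudEvent JSON object per line, but the qemu driver leaked a bare "OUTPUT: " console line into it, and encoding/json rejects that line at its first byte. A minimal reproduction:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// First non-JSON line that appears in the stdout above.
		var event map[string]interface{}
		err := json.Unmarshal([]byte("OUTPUT: "), &event)
		// Prints: invalid character 'O' looking for beginning of value
		fmt.Println(err)
	}

The pause and unpause failures further down are the same class of error; the unpause output trips the decoder on the leading '*' of a plain console line.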

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-013000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-013000 --output=json --user=testUser: exit status 83 (77.023792ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f5d8a2ae-acfc-4fd6-8a75-467676612be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-013000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"b47ab3b5-2e51-435f-92c2-ba27899cc857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-013000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-013000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-013000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-013000 --output=json --user=testUser: exit status 83 (42.070083ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-013000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-013000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-013000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-013000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.14s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-488000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-488000 --driver=qemu2 : exit status 80 (9.841482917s)

                                                
                                                
-- stdout --
	* [first-488000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-488000" primary control-plane node in "first-488000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-488000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-488000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-06 12:18:37.243462 -0700 PDT m=+3001.859730251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-490000 -n second-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-490000 -n second-490000: exit status 85 (80.827584ms)

                                                
                                                
-- stdout --
	* Profile "second-490000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-490000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-490000" host is not running, skipping log retrieval (state="* Profile \"second-490000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-490000\"")
helpers_test.go:175: Cleaning up "second-490000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-490000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-09-06 12:18:37.436374 -0700 PDT m=+3002.052643459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-488000 -n first-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-488000 -n first-488000: exit status 7 (30.011709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-488000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-488000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-488000
--- FAIL: TestMinikubeProfile (10.14s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-377000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-377000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.878915333s)

                                                
                                                
-- stdout --
	* [mount-start-1-377000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-377000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-377000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-377000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-377000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-377000 -n mount-start-1-377000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-377000 -n mount-start-1-377000: exit status 7 (68.207375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-377000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.95s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-009000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-009000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.887631667s)

                                                
                                                
-- stdout --
	* [multinode-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-009000" primary control-plane node in "multinode-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:18:47.704856    5380 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:18:47.704973    5380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:18:47.704977    5380 out.go:358] Setting ErrFile to fd 2...
	I0906 12:18:47.704979    5380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:18:47.705114    5380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:18:47.706183    5380 out.go:352] Setting JSON to false
	I0906 12:18:47.722422    5380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4697,"bootTime":1725645630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:18:47.722492    5380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:18:47.728248    5380 out.go:177] * [multinode-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:18:47.732204    5380 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:18:47.732247    5380 notify.go:220] Checking for updates...
	I0906 12:18:47.741235    5380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:18:47.744312    5380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:18:47.747280    5380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:18:47.750132    5380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:18:47.753225    5380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:18:47.756478    5380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:18:47.761115    5380 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:18:47.768226    5380 start.go:297] selected driver: qemu2
	I0906 12:18:47.768236    5380 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:18:47.768255    5380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:18:47.770659    5380 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:18:47.775211    5380 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:18:47.778333    5380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:18:47.778362    5380 cni.go:84] Creating CNI manager for ""
	I0906 12:18:47.778369    5380 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0906 12:18:47.778373    5380 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 12:18:47.778422    5380 start.go:340] cluster config:
	{Name:multinode-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:18:47.782179    5380 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:18:47.789180    5380 out.go:177] * Starting "multinode-009000" primary control-plane node in "multinode-009000" cluster
	I0906 12:18:47.793211    5380 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:18:47.793229    5380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:18:47.793240    5380 cache.go:56] Caching tarball of preloaded images
	I0906 12:18:47.793318    5380 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:18:47.793325    5380 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:18:47.793546    5380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/multinode-009000/config.json ...
	I0906 12:18:47.793559    5380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/multinode-009000/config.json: {Name:mkb0cf2d6d631cce9fcdcec4b8cacefefad82375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:18:47.793931    5380 start.go:360] acquireMachinesLock for multinode-009000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:18:47.793972    5380 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "multinode-009000"
	I0906 12:18:47.793986    5380 start.go:93] Provisioning new machine with config: &{Name:multinode-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:18:47.794016    5380 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:18:47.802234    5380 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:18:47.820676    5380 start.go:159] libmachine.API.Create for "multinode-009000" (driver="qemu2")
	I0906 12:18:47.820713    5380 client.go:168] LocalClient.Create starting
	I0906 12:18:47.820778    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:18:47.820809    5380 main.go:141] libmachine: Decoding PEM data...
	I0906 12:18:47.820818    5380 main.go:141] libmachine: Parsing certificate...
	I0906 12:18:47.820860    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:18:47.820884    5380 main.go:141] libmachine: Decoding PEM data...
	I0906 12:18:47.820893    5380 main.go:141] libmachine: Parsing certificate...
	I0906 12:18:47.821244    5380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:18:47.970468    5380 main.go:141] libmachine: Creating SSH key...
	I0906 12:18:48.141206    5380 main.go:141] libmachine: Creating Disk image...
	I0906 12:18:48.141212    5380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:18:48.141423    5380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:18:48.151127    5380 main.go:141] libmachine: STDOUT: 
	I0906 12:18:48.151152    5380 main.go:141] libmachine: STDERR: 
	I0906 12:18:48.151200    5380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2 +20000M
	I0906 12:18:48.159010    5380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:18:48.159024    5380 main.go:141] libmachine: STDERR: 
	I0906 12:18:48.159042    5380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:18:48.159047    5380 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:18:48.159061    5380 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:18:48.159089    5380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:fe:0a:65:ba:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:18:48.160706    5380 main.go:141] libmachine: STDOUT: 
	I0906 12:18:48.160721    5380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:18:48.160741    5380 client.go:171] duration metric: took 340.024583ms to LocalClient.Create
	I0906 12:18:50.162912    5380 start.go:128] duration metric: took 2.3688905s to createHost
	I0906 12:18:50.162975    5380 start.go:83] releasing machines lock for "multinode-009000", held for 2.369009667s
	W0906 12:18:50.163025    5380 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:18:50.175394    5380 out.go:177] * Deleting "multinode-009000" in qemu2 ...
	W0906 12:18:50.206464    5380 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:18:50.206492    5380 start.go:729] Will try again in 5 seconds ...
	I0906 12:18:55.208702    5380 start.go:360] acquireMachinesLock for multinode-009000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:18:55.209222    5380 start.go:364] duration metric: took 375.208µs to acquireMachinesLock for "multinode-009000"
	I0906 12:18:55.209378    5380 start.go:93] Provisioning new machine with config: &{Name:multinode-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:18:55.209608    5380 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:18:55.221457    5380 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:18:55.274919    5380 start.go:159] libmachine.API.Create for "multinode-009000" (driver="qemu2")
	I0906 12:18:55.274971    5380 client.go:168] LocalClient.Create starting
	I0906 12:18:55.275104    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:18:55.275173    5380 main.go:141] libmachine: Decoding PEM data...
	I0906 12:18:55.275191    5380 main.go:141] libmachine: Parsing certificate...
	I0906 12:18:55.275259    5380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:18:55.275304    5380 main.go:141] libmachine: Decoding PEM data...
	I0906 12:18:55.275317    5380 main.go:141] libmachine: Parsing certificate...
	I0906 12:18:55.275850    5380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:18:55.434841    5380 main.go:141] libmachine: Creating SSH key...
	I0906 12:18:55.500971    5380 main.go:141] libmachine: Creating Disk image...
	I0906 12:18:55.500981    5380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:18:55.501202    5380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:18:55.510311    5380 main.go:141] libmachine: STDOUT: 
	I0906 12:18:55.510340    5380 main.go:141] libmachine: STDERR: 
	I0906 12:18:55.510390    5380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2 +20000M
	I0906 12:18:55.518319    5380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:18:55.518335    5380 main.go:141] libmachine: STDERR: 
	I0906 12:18:55.518344    5380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:18:55.518349    5380 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:18:55.518362    5380 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:18:55.518389    5380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:87:8a:42:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:18:55.519997    5380 main.go:141] libmachine: STDOUT: 
	I0906 12:18:55.520012    5380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:18:55.520024    5380 client.go:171] duration metric: took 245.043542ms to LocalClient.Create
	I0906 12:18:57.522212    5380 start.go:128] duration metric: took 2.312582958s to createHost
	I0906 12:18:57.522295    5380 start.go:83] releasing machines lock for "multinode-009000", held for 2.313065667s
	W0906 12:18:57.522622    5380 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:18:57.531196    5380 out.go:201] 
	W0906 12:18:57.538224    5380 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:18:57.538279    5380 out.go:270] * 
	* 
	W0906 12:18:57.540601    5380 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:18:57.550211    5380 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-009000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (65.862958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
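
The trace above shows minikube's full recovery path: create the VM, hit the socket_vmnet connection refusal, delete the profile, wait five seconds, retry once, then exit with status 80 (GUEST_PROVISION). Since the refusal happens before QEMU even launches, restarting the daemon on the host is the likely fix; a sketch assuming the Homebrew install of socket_vmnet that minikube's qemu2 driver docs describe:

	# vmnet needs elevated privileges, so the service runs as root
	sudo brew services restart socket_vmnet
	ls -l /var/run/socket_vmnet   # the socket should reappear once the daemon is up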

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (98.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.445084ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-009000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- rollout status deployment/busybox: exit status 1 (56.98675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.72975ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.708417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.919917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.113584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.698416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.325792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.310209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.537584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.009375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0906 12:20:25.360412    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.386542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.901791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.374959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.092542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.483083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (29.508709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (98.25s)
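
Each kubectl invocation here fails before reaching any server: "minikube kubectl -p multinode-009000" routes through the profile's kubeconfig entry, and because the start above never got as far as kubeconfig setup, there is no server address behind the cluster name. The direct equivalent (the form the MultiNodeLabels subtest below uses) fails at the same stage, with kubectl's own wording:

	kubectl --context multinode-009000 get pods -o wide
	# Error in configuration: context was not found for specified context: multinode-009000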

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-009000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.138458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (30.776959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-009000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-009000 -v 3 --alsologtostderr: exit status 83 (44.615666ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-009000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-009000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:20:35.998083    5547 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:35.998230    5547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:35.998234    5547 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:35.998236    5547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:35.998363    5547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:35.998608    5547 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:35.998806    5547 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:36.003898    5547 out.go:177] * The control-plane node multinode-009000 host is not running: state=Stopped
	I0906 12:20:36.007843    5547 out.go:177]   To start a cluster, run: "minikube start -p multinode-009000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-009000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (30.819916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
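
"minikube node add" refuses to run against a stopped control plane, exits with status 83, and prints its own remediation. In a healthy environment the recovery sequence would be exactly what the tool suggests, assuming the profile is still usable once socket_vmnet is back:

	out/minikube-darwin-arm64 start -p multinode-009000
	out/minikube-darwin-arm64 node add -p multinode-009000 -v 3 --alsologtostderr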

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-009000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-009000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.512917ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-009000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-009000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-009000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (29.984833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
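
Unlike the earlier subtests, this one calls the system kubectl directly with --context, and that context was never written because the cluster start failed before the kubeconfig update. Confirming which contexts actually exist on the host is a one-liner:

	kubectl config get-contexts
	# the multinode-009000 entry will be absent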

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-009000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-009000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-009000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-009000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (30.138875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
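
The assertion is about node counts: the expected 3 is presumably the two nodes requested by FreshStart2Nodes plus the one AddNode tried to attach, but the saved profile only ever recorded the single provisional control-plane node. The same field can be pulled out of the JSON by hand, assuming jq is available on the host:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | {name: .Name, nodes: (.Config.Nodes | length)}'
	# shows nodes: 1 for multinode-009000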

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status --output json --alsologtostderr: exit status 7 (30.026667ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-009000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:20:36.207023    5559 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:36.207399    5559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.207404    5559 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:36.207406    5559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.207586    5559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:36.207736    5559 out.go:352] Setting JSON to true
	I0906 12:20:36.207746    5559 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:36.207826    5559 notify.go:220] Checking for updates...
	I0906 12:20:36.208197    5559 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:36.208207    5559 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:36.208394    5559 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:36.208398    5559 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:36.208401    5559 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-009000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (30.798ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
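
The unmarshal failure is a shape mismatch: for a single-node profile, "status --output json" emits one JSON object (visible in the stdout above), while the test decodes into []cmd.Status and therefore expects an array. A quick probe of the shape, again assuming jq:

	out/minikube-darwin-arm64 -p multinode-009000 status --output json | jq 'type'
	# "object" here; a multi-node profile would print "array"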

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 node stop m03: exit status 85 (43.924291ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-009000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status: exit status 7 (30.119125ms)

                                                
                                                
-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr: exit status 7 (30.884667ms)

                                                
                                                
-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:20:36.344079    5567 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:36.344236    5567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.344239    5567 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:36.344241    5567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.344381    5567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:36.344508    5567 out.go:352] Setting JSON to false
	I0906 12:20:36.344519    5567 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:36.344577    5567 notify.go:220] Checking for updates...
	I0906 12:20:36.344744    5567 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:36.344751    5567 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:36.344956    5567 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:36.344960    5567 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:36.344962    5567 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr": multinode-009000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (29.892958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (47.62s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.746667ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0906 12:20:36.404631    5571 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:36.404874    5571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.404877    5571 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:36.404879    5571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.404997    5571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:36.405238    5571 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:36.405442    5571 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:36.409817    5571 out.go:201] 
	W0906 12:20:36.412842    5571 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0906 12:20:36.412847    5571 out.go:270] * 
	* 
	W0906 12:20:36.414413    5571 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:20:36.417838    5571 out.go:201] 

** /stderr **
multinode_test.go:284: I0906 12:20:36.404631    5571 out.go:345] Setting OutFile to fd 1 ...
I0906 12:20:36.404874    5571 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 12:20:36.404877    5571 out.go:358] Setting ErrFile to fd 2...
I0906 12:20:36.404879    5571 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 12:20:36.404997    5571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
I0906 12:20:36.405238    5571 mustload.go:65] Loading cluster: multinode-009000
I0906 12:20:36.405442    5571 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 12:20:36.409817    5571 out.go:201] 
W0906 12:20:36.412842    5571 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0906 12:20:36.412847    5571 out.go:270] * 
* 
W0906 12:20:36.414413    5571 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0906 12:20:36.417838    5571 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-009000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (29.850708ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:20:36.451106    5573 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:36.451240    5573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.451243    5573 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:36.451245    5573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:36.451358    5573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:36.451481    5573 out.go:352] Setting JSON to false
	I0906 12:20:36.451492    5573 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:36.451543    5573 notify.go:220] Checking for updates...
	I0906 12:20:36.451711    5573 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:36.451718    5573 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:36.451915    5573 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:36.451919    5573 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:36.451921    5573 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (73.3025ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:20:37.215102    5577 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:37.215301    5577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:37.215305    5577 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:37.215308    5577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:37.215502    5577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:37.215654    5577 out.go:352] Setting JSON to false
	I0906 12:20:37.215667    5577 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:37.215719    5577 notify.go:220] Checking for updates...
	I0906 12:20:37.215929    5577 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:37.215939    5577 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:37.216246    5577 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:37.216251    5577 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:37.216255    5577 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (72.012333ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:20:39.137819    5579 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:39.138021    5579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:39.138026    5579 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:39.138028    5579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:39.138190    5579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:39.138354    5579 out.go:352] Setting JSON to false
	I0906 12:20:39.138367    5579 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:39.138411    5579 notify.go:220] Checking for updates...
	I0906 12:20:39.138670    5579 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:39.138681    5579 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:39.138992    5579 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:39.138997    5579 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:39.139000    5579 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (73.396209ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:20:40.987984    5583 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:40.988211    5583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:40.988216    5583 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:40.988222    5583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:40.988403    5583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:40.988572    5583 out.go:352] Setting JSON to false
	I0906 12:20:40.988587    5583 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:40.988630    5583 notify.go:220] Checking for updates...
	I0906 12:20:40.988875    5583 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:40.988885    5583 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:40.989174    5583 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:40.989179    5583 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:40.989183    5583 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (75.526667ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:20:45.267336    5587 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:45.267535    5587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:45.267539    5587 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:45.267542    5587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:45.267736    5587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:45.267878    5587 out.go:352] Setting JSON to false
	I0906 12:20:45.267906    5587 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:45.267946    5587 notify.go:220] Checking for updates...
	I0906 12:20:45.268147    5587 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:45.268156    5587 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:45.268445    5587 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:45.268450    5587 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:45.268453    5587 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (73.083833ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:20:51.416542    5595 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:51.416751    5595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:51.416756    5595 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:51.416759    5595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:51.416920    5595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:51.417071    5595 out.go:352] Setting JSON to false
	I0906 12:20:51.417087    5595 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:51.417124    5595 notify.go:220] Checking for updates...
	I0906 12:20:51.417363    5595 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:51.417371    5595 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:51.417673    5595 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:51.417678    5595 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:51.417681    5595 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (71.405667ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:20:55.436706    5603 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:55.436879    5603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:55.436883    5603 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:55.436886    5603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:55.437034    5603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:20:55.437192    5603 out.go:352] Setting JSON to false
	I0906 12:20:55.437205    5603 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:20:55.437244    5603 notify.go:220] Checking for updates...
	I0906 12:20:55.437468    5603 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:55.437477    5603 status.go:255] checking status of multinode-009000 ...
	I0906 12:20:55.437746    5603 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:20:55.437751    5603 status.go:343] host is not running, skipping remaining checks
	I0906 12:20:55.437754    5603 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (73.411625ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:21:09.226669    5615 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:21:09.226874    5615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:09.226878    5615 out.go:358] Setting ErrFile to fd 2...
	I0906 12:21:09.226881    5615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:09.227053    5615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:21:09.227196    5615 out.go:352] Setting JSON to false
	I0906 12:21:09.227209    5615 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:21:09.227253    5615 notify.go:220] Checking for updates...
	I0906 12:21:09.227476    5615 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:09.227485    5615 status.go:255] checking status of multinode-009000 ...
	I0906 12:21:09.227773    5615 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:21:09.227778    5615 status.go:343] host is not running, skipping remaining checks
	I0906 12:21:09.227781    5615 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr: exit status 7 (74.103167ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:21:23.960238    5628 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:21:23.960419    5628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:23.960424    5628 out.go:358] Setting ErrFile to fd 2...
	I0906 12:21:23.960427    5628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:23.960591    5628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:21:23.960742    5628 out.go:352] Setting JSON to false
	I0906 12:21:23.960755    5628 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:21:23.960787    5628 notify.go:220] Checking for updates...
	I0906 12:21:23.961015    5628 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:23.961023    5628 status.go:255] checking status of multinode-009000 ...
	I0906 12:21:23.961302    5628 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:21:23.961307    5628 status.go:343] host is not running, skipping remaining checks
	I0906 12:21:23.961310    5628 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-009000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (33.529167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.62s)

TestMultiNode/serial/RestartKeepsNodes (8.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-009000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-009000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-009000: (3.3597125s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-009000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-009000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225069458s)

-- stdout --
	* [multinode-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-009000" primary control-plane node in "multinode-009000" cluster
	* Restarting existing qemu2 VM for "multinode-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:21:27.450743    5656 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:21:27.450914    5656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:27.450919    5656 out.go:358] Setting ErrFile to fd 2...
	I0906 12:21:27.450921    5656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:27.451085    5656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:21:27.452335    5656 out.go:352] Setting JSON to false
	I0906 12:21:27.471822    5656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4857,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:21:27.471895    5656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:21:27.477331    5656 out.go:177] * [multinode-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:21:27.484329    5656 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:21:27.484380    5656 notify.go:220] Checking for updates...
	I0906 12:21:27.491257    5656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:21:27.494324    5656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:21:27.497317    5656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:21:27.500226    5656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:21:27.503332    5656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:21:27.506668    5656 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:27.506736    5656 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:21:27.511286    5656 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:21:27.518294    5656 start.go:297] selected driver: qemu2
	I0906 12:21:27.518301    5656 start.go:901] validating driver "qemu2" against &{Name:multinode-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:21:27.518352    5656 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:21:27.520756    5656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:21:27.520813    5656 cni.go:84] Creating CNI manager for ""
	I0906 12:21:27.520819    5656 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0906 12:21:27.520857    5656 start.go:340] cluster config:
	{Name:multinode-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:21:27.524503    5656 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:21:27.532235    5656 out.go:177] * Starting "multinode-009000" primary control-plane node in "multinode-009000" cluster
	I0906 12:21:27.535146    5656 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:21:27.535162    5656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:21:27.535170    5656 cache.go:56] Caching tarball of preloaded images
	I0906 12:21:27.535235    5656 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:21:27.535241    5656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:21:27.535328    5656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/multinode-009000/config.json ...
	I0906 12:21:27.535775    5656 start.go:360] acquireMachinesLock for multinode-009000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:27.535810    5656 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "multinode-009000"
	I0906 12:21:27.535820    5656 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:27.535826    5656 fix.go:54] fixHost starting: 
	I0906 12:21:27.535946    5656 fix.go:112] recreateIfNeeded on multinode-009000: state=Stopped err=<nil>
	W0906 12:21:27.535955    5656 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:21:27.544143    5656 out.go:177] * Restarting existing qemu2 VM for "multinode-009000" ...
	I0906 12:21:27.548232    5656 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:21:27.548272    5656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:87:8a:42:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:21:27.550431    5656 main.go:141] libmachine: STDOUT: 
	I0906 12:21:27.550454    5656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:21:27.550488    5656 fix.go:56] duration metric: took 14.662416ms for fixHost
	I0906 12:21:27.550493    5656 start.go:83] releasing machines lock for "multinode-009000", held for 14.678708ms
	W0906 12:21:27.550499    5656 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:21:27.550537    5656 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:21:27.550542    5656 start.go:729] Will try again in 5 seconds ...
	I0906 12:21:32.552774    5656 start.go:360] acquireMachinesLock for multinode-009000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:32.553155    5656 start.go:364] duration metric: took 283.417µs to acquireMachinesLock for "multinode-009000"
	I0906 12:21:32.553308    5656 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:32.553330    5656 fix.go:54] fixHost starting: 
	I0906 12:21:32.554058    5656 fix.go:112] recreateIfNeeded on multinode-009000: state=Stopped err=<nil>
	W0906 12:21:32.554085    5656 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:21:32.558570    5656 out.go:177] * Restarting existing qemu2 VM for "multinode-009000" ...
	I0906 12:21:32.565357    5656 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:21:32.565589    5656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:87:8a:42:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:21:32.575232    5656 main.go:141] libmachine: STDOUT: 
	I0906 12:21:32.575297    5656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:21:32.575373    5656 fix.go:56] duration metric: took 22.047166ms for fixHost
	I0906 12:21:32.575391    5656 start.go:83] releasing machines lock for "multinode-009000", held for 22.214583ms
	W0906 12:21:32.575563    5656 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-009000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-009000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:21:32.583613    5656 out.go:201] 
	W0906 12:21:32.587534    5656 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:21:32.587566    5656 out.go:270] * 
	* 
	W0906 12:21:32.590155    5656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:21:32.597517    5656 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-009000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-009000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (32.189083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.72s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 node delete m03: exit status 83 (38.219041ms)

-- stdout --
	* The control-plane node multinode-009000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-009000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-009000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr: exit status 7 (29.898792ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:21:32.779656    5676 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:21:32.779790    5676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:32.779793    5676 out.go:358] Setting ErrFile to fd 2...
	I0906 12:21:32.779796    5676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:32.779924    5676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:21:32.780033    5676 out.go:352] Setting JSON to false
	I0906 12:21:32.780044    5676 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:21:32.780089    5676 notify.go:220] Checking for updates...
	I0906 12:21:32.780270    5676 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:32.780277    5676 status.go:255] checking status of multinode-009000 ...
	I0906 12:21:32.780477    5676 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:21:32.780480    5676 status.go:343] host is not running, skipping remaining checks
	I0906 12:21:32.780482    5676 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (30.163125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-009000 stop: (3.142882375s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status: exit status 7 (71.618083ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr: exit status 7 (33.706417ms)

-- stdout --
	multinode-009000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 12:21:36.058799    5702 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:21:36.058948    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:36.058953    5702 out.go:358] Setting ErrFile to fd 2...
	I0906 12:21:36.058956    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:36.059084    5702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:21:36.059216    5702 out.go:352] Setting JSON to false
	I0906 12:21:36.059225    5702 mustload.go:65] Loading cluster: multinode-009000
	I0906 12:21:36.059284    5702 notify.go:220] Checking for updates...
	I0906 12:21:36.059417    5702 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:36.059424    5702 status.go:255] checking status of multinode-009000 ...
	I0906 12:21:36.059638    5702 status.go:330] multinode-009000 host status = "Stopped" (err=<nil>)
	I0906 12:21:36.059642    5702 status.go:343] host is not running, skipping remaining checks
	I0906 12:21:36.059644    5702 status.go:257] multinode-009000 status: &{Name:multinode-009000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr": multinode-009000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-009000 status --alsologtostderr": multinode-009000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (30.868791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.28s)
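Note: the two "incorrect number of stopped hosts/kubelets" failures above are count checks over the status output. A two-node cluster should report two "host: Stopped" and two "kubelet: Stopped" blocks, but only the control-plane node exists here (the worker was never added after the earlier socket_vmnet failures), so the count comes back as 1. A minimal, self-contained sketch of that kind of check follows; this is a hypothetical reconstruction, and the real assertions live in multinode_test.go and may differ in detail:

// count_check.go - hypothetical sketch of the failing count assertions.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured above: only one node block is present.
	out := "multinode-009000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
	wantNodes := 2 // a two-node cluster should print two stopped blocks
	if got := strings.Count(out, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(out, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}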

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-009000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-009000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.185862917s)

-- stdout --
	* [multinode-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-009000" primary control-plane node in "multinode-009000" cluster
	* Restarting existing qemu2 VM for "multinode-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-009000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:21:36.120065    5706 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:21:36.120172    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:36.120175    5706 out.go:358] Setting ErrFile to fd 2...
	I0906 12:21:36.120178    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:21:36.120315    5706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:21:36.121393    5706 out.go:352] Setting JSON to false
	I0906 12:21:36.137422    5706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4866,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:21:36.137498    5706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:21:36.142599    5706 out.go:177] * [multinode-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:21:36.149627    5706 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:21:36.149662    5706 notify.go:220] Checking for updates...
	I0906 12:21:36.157614    5706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:21:36.160545    5706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:21:36.163541    5706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:21:36.167621    5706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:21:36.170492    5706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:21:36.173871    5706 config.go:182] Loaded profile config "multinode-009000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:36.174145    5706 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:21:36.178523    5706 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:21:36.185584    5706 start.go:297] selected driver: qemu2
	I0906 12:21:36.185593    5706 start.go:901] validating driver "qemu2" against &{Name:multinode-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:21:36.185657    5706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:21:36.187918    5706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:21:36.187973    5706 cni.go:84] Creating CNI manager for ""
	I0906 12:21:36.187978    5706 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0906 12:21:36.188025    5706 start.go:340] cluster config:
	{Name:multinode-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:21:36.191711    5706 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:21:36.199549    5706 out.go:177] * Starting "multinode-009000" primary control-plane node in "multinode-009000" cluster
	I0906 12:21:36.203525    5706 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:21:36.203542    5706 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:21:36.203551    5706 cache.go:56] Caching tarball of preloaded images
	I0906 12:21:36.203619    5706 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:21:36.203630    5706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:21:36.203698    5706 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/multinode-009000/config.json ...
	I0906 12:21:36.204139    5706 start.go:360] acquireMachinesLock for multinode-009000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:36.204176    5706 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "multinode-009000"
	I0906 12:21:36.204186    5706 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:36.204192    5706 fix.go:54] fixHost starting: 
	I0906 12:21:36.204316    5706 fix.go:112] recreateIfNeeded on multinode-009000: state=Stopped err=<nil>
	W0906 12:21:36.204323    5706 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:21:36.208454    5706 out.go:177] * Restarting existing qemu2 VM for "multinode-009000" ...
	I0906 12:21:36.216536    5706 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:21:36.216582    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:87:8a:42:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:21:36.218723    5706 main.go:141] libmachine: STDOUT: 
	I0906 12:21:36.218749    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:21:36.218781    5706 fix.go:56] duration metric: took 14.589417ms for fixHost
	I0906 12:21:36.218788    5706 start.go:83] releasing machines lock for "multinode-009000", held for 14.60825ms
	W0906 12:21:36.218796    5706 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:21:36.218824    5706 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:21:36.218829    5706 start.go:729] Will try again in 5 seconds ...
	I0906 12:21:41.221035    5706 start.go:360] acquireMachinesLock for multinode-009000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:41.221620    5706 start.go:364] duration metric: took 440.292µs to acquireMachinesLock for "multinode-009000"
	I0906 12:21:41.221773    5706 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:41.221818    5706 fix.go:54] fixHost starting: 
	I0906 12:21:41.222613    5706 fix.go:112] recreateIfNeeded on multinode-009000: state=Stopped err=<nil>
	W0906 12:21:41.222641    5706 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:21:41.226808    5706 out.go:177] * Restarting existing qemu2 VM for "multinode-009000" ...
	I0906 12:21:41.233648    5706 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:21:41.233864    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:87:8a:42:c0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/multinode-009000/disk.qcow2
	I0906 12:21:41.243723    5706 main.go:141] libmachine: STDOUT: 
	I0906 12:21:41.243797    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:21:41.243891    5706 fix.go:56] duration metric: took 22.080833ms for fixHost
	I0906 12:21:41.243914    5706 start.go:83] releasing machines lock for "multinode-009000", held for 22.234042ms
	W0906 12:21:41.244127    5706 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-009000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-009000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:21:41.250708    5706 out.go:201] 
	W0906 12:21:41.254667    5706 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:21:41.254690    5706 out.go:270] * 
	* 
	W0906 12:21:41.257377    5706 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:21:41.264604    5706 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-009000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (68.782417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
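Note: every start/restart in this run dies at the same step: the qemu2 driver execs /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and the connection is refused, meaning nothing is listening on that unix socket on the CI host. The probe below reproduces just that step outside minikube; it is a diagnostic sketch under the assumption that the default socket path shown in the logs is in use, not part of the test suite:

// vmnet_probe.go - hypothetical diagnostic for the recurring driver error.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Mirrors the driver failure: Failed to connect ... Connection refused
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}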

TestMultiNode/serial/ValidateNameConflict (19.9s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-009000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-009000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-009000-m01 --driver=qemu2 : exit status 80 (9.791125958s)

-- stdout --
	* [multinode-009000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-009000-m01" primary control-plane node in "multinode-009000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-009000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-009000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-009000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-009000-m02 --driver=qemu2 : exit status 80 (9.870381542s)

-- stdout --
	* [multinode-009000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-009000-m02" primary control-plane node in "multinode-009000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-009000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-009000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-009000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-009000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-009000: exit status 83 (84.672208ms)

-- stdout --
	* The control-plane node multinode-009000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-009000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-009000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-009000 -n multinode-009000: exit status 7 (31.304667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.90s)

TestPreload (9.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0906 12:22:09.139428    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.794473s)

-- stdout --
	* [test-preload-373000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-373000" primary control-plane node in "test-preload-373000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-373000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:22:01.383662    5782 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:22:01.383785    5782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:01.383788    5782 out.go:358] Setting ErrFile to fd 2...
	I0906 12:22:01.383790    5782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:01.383928    5782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:22:01.385041    5782 out.go:352] Setting JSON to false
	I0906 12:22:01.401283    5782 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4891,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:22:01.401359    5782 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:22:01.407365    5782 out.go:177] * [test-preload-373000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:22:01.415468    5782 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:22:01.415526    5782 notify.go:220] Checking for updates...
	I0906 12:22:01.423297    5782 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:22:01.426349    5782 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:22:01.429303    5782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:22:01.432352    5782 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:22:01.435344    5782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:22:01.438737    5782 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:22:01.438786    5782 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:22:01.443283    5782 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:22:01.450334    5782 start.go:297] selected driver: qemu2
	I0906 12:22:01.450342    5782 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:22:01.450350    5782 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:22:01.452674    5782 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:22:01.455351    5782 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:22:01.458464    5782 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:22:01.458535    5782 cni.go:84] Creating CNI manager for ""
	I0906 12:22:01.458545    5782 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:22:01.458561    5782 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:22:01.458607    5782 start.go:340] cluster config:
	{Name:test-preload-373000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:22:01.462414    5782 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.470348    5782 out.go:177] * Starting "test-preload-373000" primary control-plane node in "test-preload-373000" cluster
	I0906 12:22:01.474172    5782 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0906 12:22:01.474251    5782 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/test-preload-373000/config.json ...
	I0906 12:22:01.474252    5782 cache.go:107] acquiring lock: {Name:mkab7a7d4abedf3c4819d7aa829fcdb26da0e508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474252    5782 cache.go:107] acquiring lock: {Name:mk00c96dc66bd89a4b57774d52d7b6b20c9d8f8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474255    5782 cache.go:107] acquiring lock: {Name:mkc8214d221df97e1b9e7cd5eb96eb7606782ce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474278    5782 cache.go:107] acquiring lock: {Name:mk213da2472df452337e3ffd2009a9611941d16f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474267    5782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/test-preload-373000/config.json: {Name:mk27ccedc18bf4f9dc0f43f926e557885147e1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:22:01.474468    5782 cache.go:107] acquiring lock: {Name:mk67ff88cae4267f4cd6f56611a8fe7871a17230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474502    5782 cache.go:107] acquiring lock: {Name:mkaab3a7605f99013e6f5b702b390264ac163748 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474517    5782 cache.go:107] acquiring lock: {Name:mk6f2f14bc294c62b4099b7e48cc26a6c2f9aedd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474588    5782 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:22:01.474598    5782 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 12:22:01.474533    5782 cache.go:107] acquiring lock: {Name:mk93bb9091a9290153c90bc2d0acc58eb15cd201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:01.474665    5782 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 12:22:01.474588    5782 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 12:22:01.474694    5782 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:22:01.474693    5782 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 12:22:01.474719    5782 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0906 12:22:01.474783    5782 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:22:01.474953    5782 start.go:360] acquireMachinesLock for test-preload-373000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:01.474995    5782 start.go:364] duration metric: took 32.833µs to acquireMachinesLock for "test-preload-373000"
	I0906 12:22:01.475008    5782 start.go:93] Provisioning new machine with config: &{Name:test-preload-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:01.475102    5782 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:01.483281    5782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:22:01.487165    5782 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 12:22:01.487182    5782 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0906 12:22:01.487205    5782 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 12:22:01.487211    5782 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:22:01.487171    5782 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 12:22:01.487257    5782 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 12:22:01.487293    5782 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:22:01.487388    5782 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:22:01.501501    5782 start.go:159] libmachine.API.Create for "test-preload-373000" (driver="qemu2")
	I0906 12:22:01.501537    5782 client.go:168] LocalClient.Create starting
	I0906 12:22:01.501607    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:22:01.501640    5782 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:01.501654    5782 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:01.501687    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:22:01.501713    5782 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:01.501719    5782 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:01.502071    5782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:22:01.654247    5782 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:01.705801    5782 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:01.705821    5782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:01.706036    5782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2
	I0906 12:22:01.715534    5782 main.go:141] libmachine: STDOUT: 
	I0906 12:22:01.715562    5782 main.go:141] libmachine: STDERR: 
	I0906 12:22:01.715659    5782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2 +20000M
	I0906 12:22:01.724926    5782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:01.724947    5782 main.go:141] libmachine: STDERR: 
	I0906 12:22:01.724961    5782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2
	I0906 12:22:01.724968    5782 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:01.724986    5782 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:01.725020    5782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:06:73:50:46:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2
	I0906 12:22:01.726979    5782 main.go:141] libmachine: STDOUT: 
	I0906 12:22:01.726996    5782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:01.727016    5782 client.go:171] duration metric: took 225.473042ms to LocalClient.Create
	I0906 12:22:01.954097    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0906 12:22:01.959621    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0906 12:22:01.963558    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0906 12:22:01.967625    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0906 12:22:01.981532    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0906 12:22:02.004044    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0906 12:22:02.006502    5782 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0906 12:22:02.006533    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0906 12:22:02.163454    5782 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0906 12:22:02.163498    5782 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 689.047792ms
	I0906 12:22:02.163539    5782 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0906 12:22:02.610015    5782 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 12:22:02.610123    5782 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 12:22:03.669702    5782 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 12:22:03.669785    5782 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.195544583s
	I0906 12:22:03.669838    5782 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 12:22:03.727369    5782 start.go:128] duration metric: took 2.252260166s to createHost
	I0906 12:22:03.727417    5782 start.go:83] releasing machines lock for "test-preload-373000", held for 2.252428833s
	W0906 12:22:03.727454    5782 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:03.740705    5782 out.go:177] * Deleting "test-preload-373000" in qemu2 ...
	W0906 12:22:03.766976    5782 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:03.767002    5782 start.go:729] Will try again in 5 seconds ...
	I0906 12:22:03.874448    5782 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0906 12:22:03.874489    5782 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.400256541s
	I0906 12:22:03.874515    5782 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0906 12:22:04.440848    5782 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0906 12:22:04.440893    5782 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.966441625s
	I0906 12:22:04.440920    5782 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0906 12:22:06.675574    5782 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0906 12:22:06.675619    5782 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.201405708s
	I0906 12:22:06.675642    5782 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0906 12:22:07.121393    5782 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0906 12:22:07.121441    5782 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.647198875s
	I0906 12:22:07.121465    5782 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0906 12:22:07.415447    5782 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0906 12:22:07.415492    5782 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.941075375s
	I0906 12:22:07.415517    5782 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0906 12:22:08.767579    5782 start.go:360] acquireMachinesLock for test-preload-373000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:08.767997    5782 start.go:364] duration metric: took 313.167µs to acquireMachinesLock for "test-preload-373000"
	I0906 12:22:08.768134    5782 start.go:93] Provisioning new machine with config: &{Name:test-preload-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:08.768413    5782 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:08.774889    5782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:22:08.825009    5782 start.go:159] libmachine.API.Create for "test-preload-373000" (driver="qemu2")
	I0906 12:22:08.825141    5782 client.go:168] LocalClient.Create starting
	I0906 12:22:08.825258    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:22:08.825334    5782 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:08.825355    5782 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:08.825421    5782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:22:08.825464    5782 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:08.825478    5782 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:08.826000    5782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:22:08.988561    5782 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:09.072269    5782 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:09.072277    5782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:09.072503    5782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2
	I0906 12:22:09.082241    5782 main.go:141] libmachine: STDOUT: 
	I0906 12:22:09.082298    5782 main.go:141] libmachine: STDERR: 
	I0906 12:22:09.082340    5782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2 +20000M
	I0906 12:22:09.090452    5782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:09.090510    5782 main.go:141] libmachine: STDERR: 
	I0906 12:22:09.090524    5782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2
	I0906 12:22:09.090535    5782 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:09.090543    5782 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:09.090584    5782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:a6:71:a5:2e:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/test-preload-373000/disk.qcow2
	I0906 12:22:09.092413    5782 main.go:141] libmachine: STDOUT: 
	I0906 12:22:09.092428    5782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:09.092442    5782 client.go:171] duration metric: took 267.297375ms to LocalClient.Create
	I0906 12:22:11.094062    5782 start.go:128] duration metric: took 2.32561925s to createHost
	I0906 12:22:11.094134    5782 start.go:83] releasing machines lock for "test-preload-373000", held for 2.326130708s
	W0906 12:22:11.094388    5782 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:11.109894    5782 out.go:201] 
	W0906 12:22:11.114325    5782 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:11.114358    5782 out.go:270] * 
	* 
	W0906 12:22:11.116870    5782 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:22:11.132077    5782 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-09-06 12:22:11.151363 -0700 PDT m=+3215.769173126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-373000 -n test-preload-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-373000 -n test-preload-373000: exit status 7 (68.203292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-373000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-373000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-373000
--- FAIL: TestPreload (9.95s)
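Triage note: every qemu2 start in this run fails the same way. minikube launches the VM through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so host creation aborts before the guest ever boots. A minimal check sketch for the CI host, assuming socket_vmnet is managed as a Homebrew service at the default socket path (the service name and install method are assumptions, not taken from this log):

    ls -l /var/run/socket_vmnet               # does the daemon's unix socket exist?
    pgrep -fl socket_vmnet                    # is the daemon process running?
    sudo brew services restart socket_vmnet   # restart it if not (the daemon runs as root)

If the daemon comes back up, the roughly ten-second "Connection refused" failures in the tests that follow should clear; they are one environment fault, not per-test regressions.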

TestScheduledStopUnix (10.23s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-542000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-542000 --memory=2048 --driver=qemu2 : exit status 80 (10.079807583s)

-- stdout --
	* [scheduled-stop-542000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-542000" primary control-plane node in "scheduled-stop-542000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-542000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-542000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-542000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-542000" primary control-plane node in "scheduled-stop-542000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-542000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-542000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-09-06 12:22:21.380905 -0700 PDT m=+3225.998789167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-542000 -n scheduled-stop-542000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-542000 -n scheduled-stop-542000: exit status 7 (69.008ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-542000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-542000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-542000
--- FAIL: TestScheduledStopUnix (10.23s)
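The failing step is the launch itself: as the TestPreload log above shows, minikube wraps qemu-system-aarch64 in socket_vmnet_client, which first connects to /var/run/socket_vmnet and then hands the connected file descriptor to QEMU as fd 3, consumed by -netdev socket,id=net0,fd=3. A trimmed reconstruction of that launch line (most flags elided and paths shortened for readability; the full command is in the TestPreload log above):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
        qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2048 -smp 2 \
        -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
        -daemonize /path/to/disk.qcow2

When that initial connect is refused, QEMU never starts at all, which is why each test fails within seconds instead of timing out on boot.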

TestSkaffold (12.7s)

=== RUN   TestSkaffold
E0906 12:22:22.271635    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2694914901 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2694914901 version: (1.063738708s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-903000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-903000 --memory=2600 --driver=qemu2 : exit status 80 (10.134253375s)

-- stdout --
	* [skaffold-903000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-903000" primary control-plane node in "skaffold-903000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-903000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-903000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-903000" primary control-plane node in "skaffold-903000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-903000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-09-06 12:22:34.08849 -0700 PDT m=+3238.706465292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-903000 -n skaffold-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-903000 -n skaffold-903000: exit status 7 (62.195042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-903000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-903000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-903000
--- FAIL: TestSkaffold (12.70s)
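For contrast, the disk-image step that precedes the network failure succeeds in every run: the logs show a two-step qemu-img sequence that seeds a qcow2 image from the raw boot disk and then grows it to the requested size. A minimal sketch with placeholder file names (the real commands, with full machine paths, appear in the TestPreload log above):

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # seed the qcow2 from the raw image
    qemu-img resize disk.qcow2 +20000M                           # grow it by the requested 20000 MB

Disk provisioning is therefore healthy; only the socket_vmnet networking layer is broken on this host.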

TestRunningBinaryUpgrade (601.23s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2337052735 start -p running-upgrade-549000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2337052735 start -p running-upgrade-549000 --memory=2200 --vm-driver=qemu2 : (50.533558542s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-549000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-549000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m35.760395166s)

-- stdout --
	* [running-upgrade-549000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-549000" primary control-plane node in "running-upgrade-549000" cluster
	* Updating the running qemu2 "running-upgrade-549000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0906 12:23:47.097535    6165 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:23:47.097670    6165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:47.097674    6165 out.go:358] Setting ErrFile to fd 2...
	I0906 12:23:47.097676    6165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:23:47.097811    6165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:23:47.098812    6165 out.go:352] Setting JSON to false
	I0906 12:23:47.115071    6165 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4997,"bootTime":1725645630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:23:47.115150    6165 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:23:47.120202    6165 out.go:177] * [running-upgrade-549000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:23:47.128228    6165 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:23:47.128290    6165 notify.go:220] Checking for updates...
	I0906 12:23:47.135194    6165 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:23:47.138186    6165 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:23:47.141188    6165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:23:47.151193    6165 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:23:47.155225    6165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:23:47.158420    6165 config.go:182] Loaded profile config "running-upgrade-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:23:47.162204    6165 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 12:23:47.165161    6165 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:23:47.169143    6165 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:23:47.176241    6165 start.go:297] selected driver: qemu2
	I0906 12:23:47.176246    6165 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50251 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:23:47.176289    6165 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:23:47.178429    6165 cni.go:84] Creating CNI manager for ""
	I0906 12:23:47.178445    6165 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:23:47.178465    6165 start.go:340] cluster config:
	{Name:running-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50251 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:23:47.178517    6165 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:23:47.185184    6165 out.go:177] * Starting "running-upgrade-549000" primary control-plane node in "running-upgrade-549000" cluster
	I0906 12:23:47.188120    6165 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0906 12:23:47.188133    6165 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0906 12:23:47.188140    6165 cache.go:56] Caching tarball of preloaded images
	I0906 12:23:47.188183    6165 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:23:47.188188    6165 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0906 12:23:47.188242    6165 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/config.json ...
	I0906 12:23:47.188574    6165 start.go:360] acquireMachinesLock for running-upgrade-549000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:23:51.037508    6165 start.go:364] duration metric: took 3.848947s to acquireMachinesLock for "running-upgrade-549000"
	I0906 12:23:51.037553    6165 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:23:51.037558    6165 fix.go:54] fixHost starting: 
	I0906 12:23:51.038290    6165 fix.go:112] recreateIfNeeded on running-upgrade-549000: state=Running err=<nil>
	W0906 12:23:51.038300    6165 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:23:51.045387    6165 out.go:177] * Updating the running qemu2 "running-upgrade-549000" VM ...
	I0906 12:23:51.049375    6165 machine.go:93] provisionDockerMachine start ...
	I0906 12:23:51.049420    6165 main.go:141] libmachine: Using SSH client type: native
	I0906 12:23:51.049550    6165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10056c5a0] 0x10056ee00 <nil>  [] 0s} localhost 50219 <nil> <nil>}
	I0906 12:23:51.049555    6165 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:23:51.123754    6165 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-549000
	
	I0906 12:23:51.123770    6165 buildroot.go:166] provisioning hostname "running-upgrade-549000"
	I0906 12:23:51.123822    6165 main.go:141] libmachine: Using SSH client type: native
	I0906 12:23:51.123947    6165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10056c5a0] 0x10056ee00 <nil>  [] 0s} localhost 50219 <nil> <nil>}
	I0906 12:23:51.123955    6165 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-549000 && echo "running-upgrade-549000" | sudo tee /etc/hostname
	I0906 12:23:51.202217    6165 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-549000
	
	I0906 12:23:51.202270    6165 main.go:141] libmachine: Using SSH client type: native
	I0906 12:23:51.202391    6165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10056c5a0] 0x10056ee00 <nil>  [] 0s} localhost 50219 <nil> <nil>}
	I0906 12:23:51.202401    6165 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-549000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-549000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-549000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:23:51.272941    6165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:23:51.272953    6165 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-2143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-2143/.minikube}
	I0906 12:23:51.272960    6165 buildroot.go:174] setting up certificates
	I0906 12:23:51.272966    6165 provision.go:84] configureAuth start
	I0906 12:23:51.272969    6165 provision.go:143] copyHostCerts
	I0906 12:23:51.273039    6165 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem, removing ...
	I0906 12:23:51.273044    6165 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem
	I0906 12:23:51.273407    6165 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem (1123 bytes)
	I0906 12:23:51.273597    6165 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem, removing ...
	I0906 12:23:51.273601    6165 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem
	I0906 12:23:51.273651    6165 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem (1675 bytes)
	I0906 12:23:51.273757    6165 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem, removing ...
	I0906 12:23:51.273760    6165 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem
	I0906 12:23:51.273807    6165 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem (1082 bytes)
	I0906 12:23:51.273887    6165 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-549000 san=[127.0.0.1 localhost minikube running-upgrade-549000]
	I0906 12:23:51.335107    6165 provision.go:177] copyRemoteCerts
	I0906 12:23:51.335143    6165 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:23:51.335151    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	I0906 12:23:51.375192    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 12:23:51.384045    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 12:23:51.392220    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:23:51.399580    6165 provision.go:87] duration metric: took 126.599417ms to configureAuth
	I0906 12:23:51.399594    6165 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:23:51.399733    6165 config.go:182] Loaded profile config "running-upgrade-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:23:51.399778    6165 main.go:141] libmachine: Using SSH client type: native
	I0906 12:23:51.399886    6165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10056c5a0] 0x10056ee00 <nil>  [] 0s} localhost 50219 <nil> <nil>}
	I0906 12:23:51.399892    6165 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:23:51.474950    6165 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:23:51.474964    6165 buildroot.go:70] root file system type: tmpfs
	I0906 12:23:51.475019    6165 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:23:51.475072    6165 main.go:141] libmachine: Using SSH client type: native
	I0906 12:23:51.475190    6165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10056c5a0] 0x10056ee00 <nil>  [] 0s} localhost 50219 <nil> <nil>}
	I0906 12:23:51.475223    6165 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:23:51.552885    6165 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:23:51.552943    6165 main.go:141] libmachine: Using SSH client type: native
	I0906 12:23:51.553072    6165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10056c5a0] 0x10056ee00 <nil>  [] 0s} localhost 50219 <nil> <nil>}
	I0906 12:23:51.553081    6165 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:23:51.628637    6165 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:23:51.628651    6165 machine.go:96] duration metric: took 579.273791ms to provisionDockerMachine
	I0906 12:23:51.628657    6165 start.go:293] postStartSetup for "running-upgrade-549000" (driver="qemu2")
	I0906 12:23:51.628663    6165 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:23:51.628715    6165 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:23:51.628727    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	I0906 12:23:51.668405    6165 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:23:51.670060    6165 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 12:23:51.670069    6165 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/addons for local assets ...
	I0906 12:23:51.670155    6165 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/files for local assets ...
	I0906 12:23:51.670252    6165 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem -> 26722.pem in /etc/ssl/certs
	I0906 12:23:51.670348    6165 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:23:51.674033    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /etc/ssl/certs/26722.pem (1708 bytes)
	I0906 12:23:51.682233    6165 start.go:296] duration metric: took 53.567958ms for postStartSetup
	I0906 12:23:51.682262    6165 fix.go:56] duration metric: took 644.708708ms for fixHost
	I0906 12:23:51.682314    6165 main.go:141] libmachine: Using SSH client type: native
	I0906 12:23:51.682437    6165 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10056c5a0] 0x10056ee00 <nil>  [] 0s} localhost 50219 <nil> <nil>}
	I0906 12:23:51.682444    6165 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:23:51.756194    6165 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650631.899357181
	
	I0906 12:23:51.756205    6165 fix.go:216] guest clock: 1725650631.899357181
	I0906 12:23:51.756209    6165 fix.go:229] Guest: 2024-09-06 12:23:51.899357181 -0700 PDT Remote: 2024-09-06 12:23:51.682263 -0700 PDT m=+4.605509126 (delta=217.094181ms)
	I0906 12:23:51.756221    6165 fix.go:200] guest clock delta is within tolerance: 217.094181ms
	I0906 12:23:51.756224    6165 start.go:83] releasing machines lock for "running-upgrade-549000", held for 718.699125ms
	I0906 12:23:51.756299    6165 ssh_runner.go:195] Run: cat /version.json
	I0906 12:23:51.756309    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	I0906 12:23:51.756322    6165 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:23:51.756341    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	W0906 12:23:51.757101    6165 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50375->127.0.0.1:50219: read: connection reset by peer
	I0906 12:23:51.757123    6165 retry.go:31] will retry after 309.490866ms: ssh: handshake failed: read tcp 127.0.0.1:50375->127.0.0.1:50219: read: connection reset by peer
	W0906 12:23:52.107073    6165 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0906 12:23:52.107155    6165 ssh_runner.go:195] Run: systemctl --version
	I0906 12:23:52.109358    6165 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:23:52.111526    6165 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:23:52.111567    6165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0906 12:23:52.114818    6165 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0906 12:23:52.119866    6165 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:23:52.119879    6165 start.go:495] detecting cgroup driver to use...
	I0906 12:23:52.119958    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:23:52.125764    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0906 12:23:52.129859    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:23:52.133749    6165 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:23:52.133804    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:23:52.137552    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:23:52.140965    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:23:52.144175    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:23:52.147643    6165 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:23:52.150862    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:23:52.154447    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:23:52.158125    6165 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:23:52.161577    6165 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:23:52.165469    6165 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:23:52.168869    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:23:52.273599    6165 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:23:52.284327    6165 start.go:495] detecting cgroup driver to use...
	I0906 12:23:52.284416    6165 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:23:52.290670    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:23:52.299980    6165 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:23:52.306979    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:23:52.312922    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:23:52.318293    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:23:52.324610    6165 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:23:52.325964    6165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:23:52.329612    6165 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 12:23:52.335643    6165 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:23:52.437794    6165 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:23:52.540006    6165 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:23:52.540191    6165 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:23:52.546242    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:23:52.646450    6165 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:24:05.930537    6165 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.284164208s)
	I0906 12:24:05.930615    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:24:05.936030    6165 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:24:05.944308    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:24:05.951394    6165 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:24:06.036061    6165 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:24:06.121867    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:06.192038    6165 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:24:06.198482    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:24:06.203331    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:06.271719    6165 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:24:06.311044    6165 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:24:06.311132    6165 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:24:06.314670    6165 start.go:563] Will wait 60s for crictl version
	I0906 12:24:06.314725    6165 ssh_runner.go:195] Run: which crictl
	I0906 12:24:06.316227    6165 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:24:06.327980    6165 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0906 12:24:06.328047    6165 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:24:06.340390    6165 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:24:06.357524    6165 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0906 12:24:06.357597    6165 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0906 12:24:06.358971    6165 kubeadm.go:883] updating cluster {Name:running-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50251 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0906 12:24:06.359015    6165 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0906 12:24:06.359053    6165 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:24:06.369470    6165 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:24:06.369478    6165 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0906 12:24:06.369519    6165 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:24:06.372864    6165 ssh_runner.go:195] Run: which lz4
	I0906 12:24:06.374115    6165 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 12:24:06.375408    6165 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 12:24:06.375421    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0906 12:24:07.409561    6165 docker.go:649] duration metric: took 1.03547825s to copy over tarball
	I0906 12:24:07.409634    6165 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 12:24:08.597785    6165 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.188145666s)
	I0906 12:24:08.597800    6165 ssh_runner.go:146] rm: /preloaded.tar.lz4
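
The preload step above is a check-then-copy-then-extract sequence: an existence probe over SSH, a copy of the cached tarball into the guest, and an lz4 extraction into /var. A minimal shell sketch of the same flow, assuming $GUEST is the VM's SSH target and $CACHE is the host-side cache directory (both names are illustrative, not from the log):

    # Probe for an existing tarball; stat exiting 1 means it is absent, so copy it over.
    ssh "$GUEST" 'stat -c "%s %y" /preloaded.tar.lz4' || \
      scp "$CACHE/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4" "$GUEST:/preloaded.tar.lz4"
    # Extract inside the guest, preserving capability xattrs, then remove the tarball.
    ssh "$GUEST" 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
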
	I0906 12:24:08.613707    6165 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:24:08.617186    6165 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0906 12:24:08.622515    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:08.695151    6165 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:24:09.911011    6165 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.215838667s)
	I0906 12:24:09.911117    6165 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:24:09.924547    6165 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:24:09.924556    6165 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0906 12:24:09.924562    6165 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 12:24:09.930166    6165 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:09.932648    6165 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:09.933698    6165 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:09.934001    6165 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:09.935323    6165 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:09.935371    6165 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:09.936963    6165 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:09.937037    6165 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:09.938685    6165 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:09.938697    6165 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:09.939536    6165 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:09.940088    6165 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:09.941057    6165 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:09.941083    6165 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0906 12:24:09.942069    6165 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:09.942792    6165 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0906 12:24:10.334798    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:10.345756    6165 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0906 12:24:10.345780    6165 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:10.345833    6165 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:10.356324    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0906 12:24:10.374043    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:10.381883    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:10.386165    6165 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0906 12:24:10.386188    6165 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:10.386239    6165 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:10.395589    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:10.400235    6165 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0906 12:24:10.400257    6165 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:10.400259    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0906 12:24:10.400303    6165 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:10.401995    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:10.411625    6165 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0906 12:24:10.411648    6165 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:10.411703    6165 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:10.413198    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0906 12:24:10.419681    6165 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0906 12:24:10.419708    6165 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:10.419758    6165 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:10.425133    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0906 12:24:10.435691    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0906 12:24:10.436326    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0906 12:24:10.437422    6165 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0906 12:24:10.437510    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:10.449951    6165 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0906 12:24:10.449976    6165 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0906 12:24:10.450027    6165 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0906 12:24:10.455704    6165 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0906 12:24:10.455726    6165 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:10.455785    6165 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:10.466305    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0906 12:24:10.466417    6165 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0906 12:24:10.471102    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0906 12:24:10.471105    6165 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0906 12:24:10.471122    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0906 12:24:10.471209    6165 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0906 12:24:10.473096    6165 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0906 12:24:10.473104    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0906 12:24:10.486155    6165 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0906 12:24:10.486169    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0906 12:24:10.536247    6165 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0906 12:24:10.536261    6165 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0906 12:24:10.536269    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0906 12:24:10.573882    6165 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
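
Each cached image above goes through the same inspect, remove, transfer, load cycle. A simplified sketch of one pass; the real code compares the inspected image ID against an expected digest before deciding a transfer is needed, and $IMG, $WANT, and $TARBALL are illustrative placeholders:

    IMG=registry.k8s.io/pause:3.7                        # illustrative: one image per pass
    TARBALL=/var/lib/minikube/images/pause_3.7
    have=$(docker image inspect --format '{{.Id}}' "$IMG" 2>/dev/null || true)
    if [ "$have" != "$WANT" ]; then                      # $WANT: expected sha256 digest
      docker rmi "$IMG" 2>/dev/null || true              # drop the stale/wrong-arch copy
      sudo cat "$TARBALL" | docker load                  # the same pipe the log runs
    fi
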
	W0906 12:24:10.820614    6165 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 12:24:10.821040    6165 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:10.857434    6165 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0906 12:24:10.857487    6165 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:10.857594    6165 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:11.087995    6165 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 12:24:11.088160    6165 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 12:24:11.090634    6165 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0906 12:24:11.090656    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0906 12:24:11.131168    6165 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 12:24:11.131182    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0906 12:24:11.371384    6165 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 12:24:11.371422    6165 cache_images.go:92] duration metric: took 1.446862625s to LoadCachedImages
	W0906 12:24:11.371472    6165 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0906 12:24:11.371478    6165 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0906 12:24:11.371533    6165 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-549000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
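
The rendered kubelet unit and flags above are installed as a systemd drop-in and picked up with a daemon reload, which the scp and systemctl calls further down in this log perform. Roughly, with paths and flags taken from the log itself:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-549000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
    EOF
    sudo systemctl daemon-reload
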
	I0906 12:24:11.371594    6165 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:24:11.385926    6165 cni.go:84] Creating CNI manager for ""
	I0906 12:24:11.385937    6165 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:24:11.385944    6165 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:24:11.385952    6165 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-549000 NodeName:running-upgrade-549000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:24:11.386021    6165 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-549000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
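The config above is first written to kubeadm.yaml.new; whether the node needs reconfiguring is decided by diffing it against the copy already on disk (the exact command appears later in this log, where it surfaces the criSocket and cgroupDriver drift). As a one-liner:

    # Non-zero diff status means drift; minikube then reconfigures from the .new file.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || echo "kubeadm config drift detected"
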
	I0906 12:24:11.386075    6165 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0906 12:24:11.389327    6165 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:24:11.389357    6165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:24:11.392055    6165 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0906 12:24:11.397078    6165 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:24:11.401859    6165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0906 12:24:11.406917    6165 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0906 12:24:11.408209    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:11.498408    6165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:24:11.503909    6165 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000 for IP: 10.0.2.15
	I0906 12:24:11.503920    6165 certs.go:194] generating shared ca certs ...
	I0906 12:24:11.503928    6165 certs.go:226] acquiring lock for ca certs: {Name:mkeb2acf337d35e5b807329b963b0c0723ad2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:11.504078    6165 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key
	I0906 12:24:11.504126    6165 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key
	I0906 12:24:11.504132    6165 certs.go:256] generating profile certs ...
	I0906 12:24:11.504201    6165 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.key
	I0906 12:24:11.504217    6165 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.key.34e730b7
	I0906 12:24:11.504229    6165 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.crt.34e730b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0906 12:24:11.621058    6165 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.crt.34e730b7 ...
	I0906 12:24:11.621073    6165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.crt.34e730b7: {Name:mk0184798f71af68b68d83fb076cdcb179fe06d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:11.621368    6165 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.key.34e730b7 ...
	I0906 12:24:11.621380    6165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.key.34e730b7: {Name:mk89a7a9079e444d2d15d781973feed32db231ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:11.621523    6165 certs.go:381] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.crt.34e730b7 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.crt
	I0906 12:24:11.621655    6165 certs.go:385] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.key.34e730b7 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.key
	I0906 12:24:11.621826    6165 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/proxy-client.key
	I0906 12:24:11.621958    6165 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem (1338 bytes)
	W0906 12:24:11.621986    6165 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672_empty.pem, impossibly tiny 0 bytes
	I0906 12:24:11.621992    6165 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:24:11.622017    6165 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem (1082 bytes)
	I0906 12:24:11.622042    6165 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:24:11.622067    6165 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem (1675 bytes)
	I0906 12:24:11.622120    6165 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem (1708 bytes)
	I0906 12:24:11.622456    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:24:11.636437    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 12:24:11.649276    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:24:11.656593    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 12:24:11.663405    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 12:24:11.670382    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:24:11.676864    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:24:11.684356    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 12:24:11.691790    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:24:11.698560    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem --> /usr/share/ca-certificates/2672.pem (1338 bytes)
	I0906 12:24:11.705177    6165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /usr/share/ca-certificates/26722.pem (1708 bytes)
	I0906 12:24:11.712422    6165 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:24:11.717957    6165 ssh_runner.go:195] Run: openssl version
	I0906 12:24:11.719684    6165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:24:11.722967    6165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:11.724345    6165 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:11.724363    6165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:11.726222    6165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:24:11.728925    6165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2672.pem && ln -fs /usr/share/ca-certificates/2672.pem /etc/ssl/certs/2672.pem"
	I0906 12:24:11.732283    6165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2672.pem
	I0906 12:24:11.733719    6165 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:44 /usr/share/ca-certificates/2672.pem
	I0906 12:24:11.733737    6165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2672.pem
	I0906 12:24:11.735449    6165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2672.pem /etc/ssl/certs/51391683.0"
	I0906 12:24:11.738210    6165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26722.pem && ln -fs /usr/share/ca-certificates/26722.pem /etc/ssl/certs/26722.pem"
	I0906 12:24:11.741251    6165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26722.pem
	I0906 12:24:11.742667    6165 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:44 /usr/share/ca-certificates/26722.pem
	I0906 12:24:11.742687    6165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26722.pem
	I0906 12:24:11.744372    6165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26722.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:24:11.747368    6165 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:24:11.748891    6165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:24:11.750916    6165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:24:11.752590    6165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:24:11.754410    6165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:24:11.756223    6165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:24:11.758066    6165 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
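
The cert setup above has two verifiable pieces: each CA is linked into /etc/ssl/certs under its OpenSSL subject-hash name, and each control-plane client cert is checked for expiry within the next 24 hours. A sketch using one CA from this log:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")     # yields b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
    # -checkend 86400 exits non-zero if the cert expires within 24h (86400 seconds).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
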
	I0906 12:24:11.759799    6165 kubeadm.go:392] StartCluster: {Name:running-upgrade-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50251 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:24:11.759873    6165 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:24:11.770026    6165 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:24:11.773862    6165 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:24:11.773868    6165 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:24:11.773891    6165 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:24:11.776965    6165 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:24:11.777219    6165 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-549000" does not appear in /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:24:11.777275    6165 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-2143/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-549000" cluster setting kubeconfig missing "running-upgrade-549000" context setting]
	I0906 12:24:11.777405    6165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:11.777825    6165 kapi.go:59] client config for running-upgrade-549000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101b27f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:24:11.778142    6165 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:24:11.781009    6165 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-549000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0906 12:24:11.781015    6165 kubeadm.go:1160] stopping kube-system containers ...
	I0906 12:24:11.781051    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:24:11.792383    6165 docker.go:483] Stopping containers: [faa16963515f 5a415e227211 5482c1569195 af7f1d7d791c 83072e7597be 0e51623f1442 e069c433a27b 1d22e3aafbce 0bb489598d1c c921c45bc935 b4e8dbebff44 9ffdd40a7a41 48519ad4d4fa 99afd6f016ad 4cc706e9376b e1a05fb1cfe7]
	I0906 12:24:11.792445    6165 ssh_runner.go:195] Run: docker stop faa16963515f 5a415e227211 5482c1569195 af7f1d7d791c 83072e7597be 0e51623f1442 e069c433a27b 1d22e3aafbce 0bb489598d1c c921c45bc935 b4e8dbebff44 9ffdd40a7a41 48519ad4d4fa 99afd6f016ad 4cc706e9376b e1a05fb1cfe7
	I0906 12:24:11.939354    6165 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 12:24:11.995227    6165 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:24:11.998658    6165 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Sep  6 19:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep  6 19:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep  6 19:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep  6 19:23 /etc/kubernetes/scheduler.conf
	
	I0906 12:24:11.998686    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/admin.conf
	I0906 12:24:12.001552    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:24:12.001579    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:24:12.004923    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/kubelet.conf
	I0906 12:24:12.008575    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:24:12.008610    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:24:12.012146    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/controller-manager.conf
	I0906 12:24:12.018665    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:24:12.018721    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:24:12.023505    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/scheduler.conf
	I0906 12:24:12.026223    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:24:12.026248    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 12:24:12.029138    6165 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:24:12.032486    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:12.055138    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:12.455292    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:12.646618    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:12.681258    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
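
On a restart, minikube re-runs individual kubeadm init phases against the repaired config rather than a full kubeadm init. The five phases above, replayed with a small helper using the same binary path and config file as in the log:

    kubeadm_phase() {
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase "$@" --config /var/tmp/minikube/kubeadm.yaml
    }
    kubeadm_phase certs all
    kubeadm_phase kubeconfig all
    kubeadm_phase kubelet-start
    kubeadm_phase control-plane all
    kubeadm_phase etcd local
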
	I0906 12:24:12.704479    6165 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:24:12.704554    6165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:13.206841    6165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:13.706148    6165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:13.710449    6165 api_server.go:72] duration metric: took 1.005978833s to wait for apiserver process to appear ...
	I0906 12:24:13.710460    6165 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:24:13.710469    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:18.712505    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:18.712531    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:23.712798    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:23.712862    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:28.713329    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:28.713365    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:33.714026    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:33.714066    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:38.714723    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:38.714818    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:43.715794    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:43.715823    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:48.716851    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:48.716872    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:53.718196    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:53.718223    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:58.719880    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:58.719898    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:03.721988    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:03.722031    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:08.724338    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:08.724360    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:13.726117    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
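
Each probe above times out after roughly five seconds, and twelve in a row fail before minikube falls back to collecting diagnostics, consistent with an apiserver that never became reachable. The equivalent manual probe, with insecure TLS and a 5s cap per attempt:

    # Poll the apiserver health endpoint until it answers 200 OK.
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "apiserver not healthy yet; retrying"
      sleep 2
    done
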
	I0906 12:25:13.726301    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:13.744209    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:13.744287    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:13.755067    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:13.755149    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:13.765872    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:13.765943    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:13.776305    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:13.776371    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:13.786246    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:13.786305    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:13.796693    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:13.796768    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:13.807000    6165 logs.go:276] 0 containers: []
	W0906 12:25:13.807012    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:13.807073    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:13.817400    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:13.817422    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:13.817427    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:13.831347    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:13.831356    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:13.844014    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:13.844024    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:13.870725    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:13.870732    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:13.882214    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:13.882225    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:13.949411    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:13.949423    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:13.964130    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:13.964143    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:13.977048    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:13.977062    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:13.988384    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:13.988395    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:14.001154    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:14.001168    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:14.045282    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:14.045295    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:14.058668    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:14.058679    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:14.070973    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:14.070984    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:14.085078    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:14.085089    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:14.100386    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:14.100399    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:14.117677    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:14.117686    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
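
The diagnostic pass above is mechanical: enumerate containers (running or exited) per component via a docker name filter, tail the last 400 lines of each, and pull the kubelet and docker journals. For one component:

    # Tail recent logs for every apiserver container, current and previous instances.
    for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
      docker logs --tail 400 "$id"
    done
    sudo journalctl -u kubelet -n 400    # kubelet unit log, as gathered above
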
	I0906 12:25:16.624521    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:21.626770    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:21.626923    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:21.638213    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:21.638289    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:21.648880    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:21.648956    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:21.659242    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:21.659306    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:21.670243    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:21.670314    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:21.680910    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:21.680973    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:21.691339    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:21.691401    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:21.701779    6165 logs.go:276] 0 containers: []
	W0906 12:25:21.701788    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:21.701837    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:21.712162    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:21.712186    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:21.712192    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:21.725973    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:21.725983    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:21.739949    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:21.739961    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:21.751286    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:21.751297    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:21.763110    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:21.763121    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:21.780676    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:21.780686    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:21.792118    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:21.792130    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:21.834239    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:21.834246    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:21.838977    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:21.838983    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:21.853317    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:21.853328    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:21.866108    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:21.866123    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:21.877444    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:21.877455    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:21.888943    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:21.888954    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:21.925207    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:21.925222    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:21.937157    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:21.937168    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:21.948659    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:21.948672    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:24.476773    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:29.479063    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:29.479235    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:29.502953    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:29.503069    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:29.518907    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:29.518989    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:29.531855    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:29.531935    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:29.543117    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:29.543185    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:29.553842    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:29.553906    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:29.564837    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:29.564903    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:29.575110    6165 logs.go:276] 0 containers: []
	W0906 12:25:29.575123    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:29.575176    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:29.586021    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:29.586039    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:29.586045    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:29.597517    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:29.597529    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:29.609216    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:29.609227    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:29.625037    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:29.625049    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:29.635957    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:29.635969    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:29.653720    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:29.653733    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:29.667388    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:29.667402    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:29.681654    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:29.681664    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:29.692795    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:29.692806    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:29.719476    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:29.719484    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:29.731616    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:29.731628    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:29.737228    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:29.737235    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:29.771039    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:29.771051    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:29.786728    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:29.786740    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:29.798526    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:29.798540    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:29.839845    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:29.839854    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
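
	The five-second gap between each "Checking apiserver healthz" line and its matching "stopped:" line above is a client-side timeout: the GET to https://10.0.2.15:8443/healthz never returns headers, so the Go HTTP client aborts with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". A minimal stand-alone sketch of such a probe, assuming a plain http.Client with a 5s timeout and skipped TLS verification (minikube's real api_server.go instead wires in the cluster CA), follows; it is an illustration, not minikube's implementation:

	// Hypothetical re-creation of the healthz probe pattern seen in this trace.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
			Transport: &http.Transport{
				// Assumption for the sketch only: skip CA setup entirely.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			// On timeout this yields: Get "...": context deadline exceeded
			// (Client.Timeout exceeded while awaiting headers)
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("apiserver not healthy:", err)
		}
	}
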
	I0906 12:25:32.354416    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:37.356645    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:37.356898    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:37.382070    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:37.382160    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:37.399385    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:37.399465    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:37.413613    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:37.413679    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:37.429365    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:37.429437    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:37.440402    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:37.440470    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:37.453783    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:37.453855    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:37.476121    6165 logs.go:276] 0 containers: []
	W0906 12:25:37.476134    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:37.476193    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:37.487332    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:37.487352    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:37.487358    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:37.492489    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:37.492503    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:37.530979    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:37.530992    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:37.545482    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:37.545494    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:37.560286    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:37.560297    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:37.571803    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:37.571815    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:37.584102    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:37.584115    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:37.624609    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:37.624619    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:37.642138    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:37.642149    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:37.666847    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:37.666854    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:37.679430    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:37.679443    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:37.691271    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:37.691283    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:37.703388    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:37.703401    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:37.719244    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:37.719256    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:37.730762    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:37.730775    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:37.748275    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:37.748286    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:40.274717    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:45.277206    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:45.277626    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:45.318297    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:45.318439    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:45.338869    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:45.338962    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:45.353945    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:45.354023    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:45.366904    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:45.366980    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:45.378160    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:45.378228    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:45.388194    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:45.388260    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:45.398082    6165 logs.go:276] 0 containers: []
	W0906 12:25:45.398093    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:45.398155    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:45.408869    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:45.408886    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:45.408891    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:45.450228    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:45.450237    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:45.454752    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:45.454761    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:45.469679    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:45.469690    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:45.482356    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:45.482373    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:45.497516    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:45.497527    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:45.508944    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:45.508955    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:45.520706    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:45.520716    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:45.538676    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:45.538689    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:45.563898    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:45.563909    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:45.598443    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:45.598456    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:45.612841    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:45.612853    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:45.624591    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:45.624603    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:45.636812    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:45.636823    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:45.650474    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:45.650485    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:45.662741    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:45.662751    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:48.176505    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:53.178759    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:53.178939    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:53.192651    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:53.192721    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:53.207632    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:53.207697    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:53.218769    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:53.218828    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:53.229436    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:53.229505    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:53.239490    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:53.239550    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:53.249723    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:53.249791    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:53.259944    6165 logs.go:276] 0 containers: []
	W0906 12:25:53.259957    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:53.260008    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:53.270054    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:53.270072    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:53.270082    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:53.305510    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:53.305522    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:53.317809    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:53.317820    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:53.329072    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:53.329082    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:53.345810    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:53.345822    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:53.386461    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:53.386469    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:53.403856    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:53.403869    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:53.415381    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:53.415394    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:53.426276    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:53.426288    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:53.451042    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:53.451051    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:53.462612    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:53.462624    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:53.473927    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:53.473939    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:53.479266    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:53.479273    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:53.493591    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:53.493601    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:53.512656    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:53.512671    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:53.524616    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:53.524630    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:56.041381    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:01.043733    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:01.043930    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:01.060321    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:01.060405    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:01.072642    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:01.072706    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:01.083270    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:01.083335    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:01.100520    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:01.100595    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:01.111198    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:01.111271    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:01.121404    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:01.121470    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:01.131777    6165 logs.go:276] 0 containers: []
	W0906 12:26:01.131790    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:01.131843    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:01.142790    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:01.142808    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:01.142813    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:01.189301    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:01.189312    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:01.201728    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:01.201741    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:01.217436    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:01.217448    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:01.233124    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:01.233138    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:01.251432    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:01.251443    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:01.270848    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:01.270860    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:01.288201    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:01.288211    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:01.303089    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:01.303100    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:01.329332    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:01.329343    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:01.351781    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:01.351792    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:01.363352    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:01.363363    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:01.367891    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:01.367898    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:01.403862    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:01.403874    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:01.417981    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:01.417990    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:01.429324    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:01.429334    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:03.943735    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:08.946210    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:08.946560    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:08.977257    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:08.977381    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:08.996406    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:08.996496    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:09.010903    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:09.010982    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:09.023196    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:09.023269    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:09.034112    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:09.034172    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:09.044770    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:09.044837    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:09.057096    6165 logs.go:276] 0 containers: []
	W0906 12:26:09.057107    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:09.057167    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:09.067593    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:09.067611    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:09.067616    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:09.081575    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:09.081586    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:09.100262    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:09.100277    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:09.113928    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:09.113943    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:09.125181    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:09.125197    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:09.150938    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:09.150945    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:09.186431    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:09.186444    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:09.201478    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:09.201491    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:09.213138    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:09.213149    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:09.224906    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:09.224917    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:09.264841    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:09.264850    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:09.278511    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:09.278521    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:09.290765    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:09.290776    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:09.303070    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:09.303081    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:09.307680    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:09.307689    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:09.319918    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:09.319928    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
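
	Each gather cycle above has the same two-step shape: enumerate a component's containers with a docker ps name filter (name=k8s_<component>), then tail each container's last 400 log lines, warning when a filter (here "kindnet") matches nothing. A minimal stand-alone sketch of that shape, assuming direct local docker access rather than minikube's SSH runner, follows; command strings are taken verbatim from the trace, error handling is simplified:

	// Hypothetical sketch of the per-component log-gathering step in this trace.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func gatherComponentLogs(component string) error {
		// e.g. component = "etcd" -> filter "name=k8s_etcd", as in the trace
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Mirrors the trace's warning for the missing kindnet container.
			fmt.Printf("No container was found matching %q\n", component)
			return nil
		}
		for _, id := range ids {
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			_ = gatherComponentLogs(c)
		}
	}
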
	I0906 12:26:11.832687    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:16.834549    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:16.834738    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:16.857809    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:16.857925    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:16.873373    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:16.873457    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:16.885611    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:16.885679    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:16.896527    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:16.896599    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:16.906575    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:16.906647    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:16.917363    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:16.917428    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:16.927679    6165 logs.go:276] 0 containers: []
	W0906 12:26:16.927693    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:16.927753    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:16.943178    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:16.943195    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:16.943203    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:16.955228    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:16.955239    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:16.959694    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:16.959701    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:16.972031    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:16.972044    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:16.986582    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:16.986593    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:17.012816    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:17.012823    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:17.036243    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:17.036252    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:17.049254    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:17.049266    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:17.064731    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:17.064742    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:17.081713    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:17.081723    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:17.092382    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:17.092397    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:17.103849    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:17.103860    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:17.120842    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:17.120853    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:17.162191    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:17.162200    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:17.196890    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:17.196902    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:17.211431    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:17.211445    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:19.724881    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:24.727213    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:24.727389    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:24.758183    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:24.758283    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:24.774286    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:24.774358    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:24.792819    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:24.792892    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:24.803565    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:24.803634    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:24.814175    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:24.814242    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:24.826078    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:24.826142    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:24.835789    6165 logs.go:276] 0 containers: []
	W0906 12:26:24.835806    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:24.835863    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:24.850391    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:24.850409    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:24.850415    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:24.893505    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:24.893514    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:24.917951    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:24.917958    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:24.930172    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:24.930183    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:24.951068    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:24.951082    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:24.962971    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:24.962983    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:24.975427    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:24.975438    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:24.986833    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:24.986846    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:24.998180    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:24.998191    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:25.034474    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:25.034485    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:25.048026    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:25.048040    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:25.061133    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:25.061145    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:25.078062    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:25.078073    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:25.083175    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:25.083185    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:25.099990    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:25.100000    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:25.114934    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:25.114944    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:27.629136    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:32.630117    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:32.630333    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:32.650538    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:32.650632    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:32.665250    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:32.665326    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:32.677358    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:32.677428    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:32.688323    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:32.688391    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:32.699401    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:32.699472    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:32.710999    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:32.711068    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:32.721351    6165 logs.go:276] 0 containers: []
	W0906 12:26:32.721362    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:32.721412    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:32.731996    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:32.732014    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:32.732019    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:32.756819    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:32.756832    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:32.761531    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:32.761538    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:32.776118    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:32.776129    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:32.790044    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:32.790054    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:32.804555    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:32.804568    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:32.818986    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:32.818997    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:32.831204    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:32.831218    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:32.865936    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:32.865945    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:32.879252    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:32.879264    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:32.891018    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:32.891029    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:32.911389    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:32.911399    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:32.922984    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:32.922994    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:32.934583    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:32.934595    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:32.977095    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:32.977107    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:32.989824    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:32.989837    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:35.503448    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:40.505847    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:40.506041    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:40.524432    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:40.524521    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:40.537791    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:40.537868    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:40.548797    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:40.548865    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:40.559385    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:40.559452    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:40.570376    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:40.570439    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:40.581021    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:40.581091    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:40.591366    6165 logs.go:276] 0 containers: []
	W0906 12:26:40.591378    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:40.591436    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:40.601731    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:40.601749    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:40.601755    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:40.614034    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:40.614048    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:40.625545    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:40.625557    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:40.637411    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:40.637423    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:40.654492    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:40.654504    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:40.669083    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:40.669096    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:40.680843    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:40.680856    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:40.695330    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:40.695341    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:40.735987    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:40.735997    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:40.749952    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:40.749962    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:40.764510    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:40.764521    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:40.788168    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:40.788174    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:40.792229    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:40.792235    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:40.826348    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:40.826361    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:40.840563    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:40.840576    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:40.856149    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:40.856164    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:43.375753    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:48.378367    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:48.378546    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:48.397254    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:48.397343    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:48.411237    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:48.411318    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:48.422306    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:48.422371    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:48.432592    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:48.432665    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:48.443118    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:48.443189    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:48.453473    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:48.453543    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:48.463473    6165 logs.go:276] 0 containers: []
	W0906 12:26:48.463484    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:48.463544    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:48.475654    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:48.475672    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:48.475678    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:48.489977    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:48.489987    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:48.501544    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:48.501555    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:48.512492    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:48.512506    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:48.523608    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:48.523619    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:48.565861    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:48.565878    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:48.570948    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:48.570954    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:48.584613    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:48.584624    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:48.598475    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:48.598488    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:48.611329    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:48.611342    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:48.645988    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:48.646002    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:48.658000    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:48.658014    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:48.676154    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:48.676166    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:48.701631    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:48.701641    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:48.715402    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:48.715411    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:48.726839    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:48.726850    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:51.240482    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:56.242767    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:56.243100    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:56.273702    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:56.273828    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:56.293243    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:56.293349    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:56.307883    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:56.307952    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:56.320668    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:56.320742    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:56.331300    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:56.331358    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:56.342223    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:56.342289    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:56.352860    6165 logs.go:276] 0 containers: []
	W0906 12:26:56.352871    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:56.352931    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:56.363461    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:56.363481    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:56.363487    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:56.374936    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:56.374946    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:56.387594    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:56.387605    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:56.399935    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:56.399949    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:56.416677    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:56.416688    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:56.427941    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:56.427957    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:56.452983    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:56.452993    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:56.457177    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:56.457183    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:56.492716    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:56.492728    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:56.504691    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:56.504706    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:56.516341    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:56.516351    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:56.530359    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:56.530371    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:56.543208    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:56.543221    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:56.557452    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:56.557463    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:56.599003    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:56.599016    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:56.613108    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:56.613121    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
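Each retry rebuilds its container inventory by filtering docker ps -a on the k8s_ name prefix, one component at a time, which is what produces the paired Run / "N containers" lines above. A rough Go equivalent of that step (containerIDs is a hypothetical helper for illustration, not minikube's logs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the "docker ps -a --filter=name=k8s_<component>
    // --format={{.ID}}" calls seen throughout this log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
        }
    }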
	I0906 12:26:59.126502    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:04.127624    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:04.127732    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:04.139462    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:04.139540    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:04.151000    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:04.151075    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:04.161529    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:04.161595    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:04.172094    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:04.172163    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:04.183416    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:04.183485    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:04.194321    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:04.194386    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:04.204785    6165 logs.go:276] 0 containers: []
	W0906 12:27:04.204797    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:04.204851    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:04.214993    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:04.215025    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:04.215032    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:04.228561    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:04.228572    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:04.247709    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:04.247720    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:04.261724    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:04.261734    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:04.302555    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:04.302565    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:04.342542    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:04.342553    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:04.354679    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:04.354690    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:04.366636    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:04.366647    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:04.380829    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:04.380839    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:04.392223    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:04.392234    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:04.417755    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:04.417762    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:04.422562    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:04.422569    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:04.437043    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:04.437054    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:04.448864    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:04.448874    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:04.466457    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:04.466470    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:04.477698    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:04.477712    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:06.992246    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:11.994283    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:11.994578    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:12.026094    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:12.026225    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:12.046733    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:12.046820    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:12.067075    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:12.067148    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:12.083185    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:12.083260    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:12.103742    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:12.103815    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:12.126499    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:12.126570    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:12.138062    6165 logs.go:276] 0 containers: []
	W0906 12:27:12.138077    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:12.138137    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:12.148999    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:12.149018    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:12.149024    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:12.184422    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:12.184435    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:12.196927    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:12.196944    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:12.209757    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:12.209768    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:12.214321    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:12.214328    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:12.233254    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:12.233265    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:12.251661    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:12.251671    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:12.275964    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:12.275972    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:12.318487    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:12.318503    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:12.331142    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:12.331155    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:12.345270    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:12.345282    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:12.356158    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:12.356169    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:12.367171    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:12.367185    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:12.380903    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:12.380917    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:12.394896    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:12.394909    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:12.406679    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:12.406693    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
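The "container status" step embeds a shell fallback chain: use crictl when it is on PATH, otherwise fall back to docker ps -a, hence the `which crictl || echo crictl` in those Run lines. The same idea sketched in Go (the ordering is taken from the logged command; the rest is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirror of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
        // Try crictl first; if it is missing or fails, fall back to docker.
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            fmt.Print(string(out))
            return
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("both runtimes failed:", err)
            return
        }
        fmt.Print(string(out))
    }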
	I0906 12:27:14.921125    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:19.923871    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:19.924225    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:19.959644    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:19.959780    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:19.980436    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:19.980546    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:19.995747    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:19.995817    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:20.008964    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:20.009030    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:20.020355    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:20.020421    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:20.031597    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:20.031659    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:20.042178    6165 logs.go:276] 0 containers: []
	W0906 12:27:20.042189    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:20.042239    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:20.054219    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:20.054236    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:20.054242    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:20.071697    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:20.071707    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:20.086545    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:20.086555    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:20.098331    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:20.098342    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:20.109884    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:20.109898    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:20.121691    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:20.121704    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:20.156322    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:20.156335    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:20.171703    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:20.171714    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:20.196960    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:20.196970    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:20.239155    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:20.239164    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:20.250957    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:20.250969    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:20.268060    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:20.268072    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:20.280018    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:20.280030    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:20.291169    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:20.291184    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:20.302695    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:20.302708    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:20.307603    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:20.307611    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:22.821790    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:27.824155    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:27.824599    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:27.867355    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:27.867487    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:27.888291    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:27.888389    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:27.903091    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:27.903166    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:27.915625    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:27.915702    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:27.925934    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:27.925995    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:27.936689    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:27.936758    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:27.947054    6165 logs.go:276] 0 containers: []
	W0906 12:27:27.947065    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:27.947129    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:27.958767    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:27.958786    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:27.958791    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:28.002251    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:28.002278    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:28.036818    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:28.036834    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:28.048789    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:28.048805    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:28.062238    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:28.062250    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:28.079790    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:28.079802    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:28.084117    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:28.084126    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:28.095574    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:28.095587    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:28.107363    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:28.107373    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:28.121436    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:28.121447    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:28.133400    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:28.133413    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:28.144396    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:28.144410    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:28.168264    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:28.168274    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:28.183462    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:28.183478    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:28.198570    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:28.198586    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:28.213967    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:28.213979    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:30.727876    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:35.729391    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:35.729591    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:35.751455    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:35.751581    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:35.768170    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:35.768255    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:35.782955    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:35.783050    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:35.797428    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:35.797496    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:35.807866    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:35.807927    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:35.818526    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:35.818586    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:35.828766    6165 logs.go:276] 0 containers: []
	W0906 12:27:35.828779    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:35.828831    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:35.844173    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:35.844198    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:35.844203    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:35.858457    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:35.858468    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:35.875672    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:35.875682    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:35.891221    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:35.891232    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:35.927599    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:35.927609    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:35.938986    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:35.938998    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:35.951346    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:35.951356    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:35.955849    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:35.955856    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:35.972119    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:35.972133    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:35.983894    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:35.983908    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:36.008827    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:36.008842    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:36.022710    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:36.022720    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:36.037289    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:36.037299    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:36.048884    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:36.048898    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:36.061653    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:36.061663    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:36.074464    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:36.074476    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:38.619774    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:43.622007    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:43.622225    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:43.660716    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:43.660812    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:43.675872    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:43.675949    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:43.687793    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:43.687856    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:43.698493    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:43.698562    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:43.709065    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:43.709136    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:43.719518    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:43.719587    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:43.732828    6165 logs.go:276] 0 containers: []
	W0906 12:27:43.732839    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:43.732897    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:43.742805    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:43.742828    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:43.742834    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:43.754037    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:43.754049    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:43.766818    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:43.766828    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:43.778544    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:43.778555    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:43.822245    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:43.822253    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:43.826822    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:43.826831    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:43.844833    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:43.844844    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:43.856038    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:43.856052    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:43.879968    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:43.879976    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:43.918429    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:43.918442    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:43.932583    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:43.932594    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:43.944499    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:43.944510    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:43.955986    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:43.955997    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:43.969262    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:43.969275    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:43.981531    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:43.981541    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:44.003296    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:44.003313    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:46.519684    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:51.522428    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:51.522617    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:51.541224    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:51.541308    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:51.554851    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:51.554929    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:51.565899    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:51.565970    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:51.577012    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:51.577087    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:51.587312    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:51.587383    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:51.598306    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:51.598371    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:51.608657    6165 logs.go:276] 0 containers: []
	W0906 12:27:51.608667    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:51.608721    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:51.619031    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:51.619049    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:51.619056    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:51.631042    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:51.631054    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:51.643381    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:51.643391    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:51.654838    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:51.654849    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:51.668656    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:51.668670    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:51.687352    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:51.687367    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:51.699331    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:51.699342    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:51.735773    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:51.735786    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:51.747580    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:51.747595    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:51.762162    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:51.762174    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:51.776728    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:51.776739    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:51.788055    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:51.788070    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:51.805814    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:51.805830    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:51.817247    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:51.817259    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:51.839948    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:51.839956    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:51.880078    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:51.880089    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:54.386285    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:59.389079    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:59.389460    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:59.429566    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:59.429698    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:59.459058    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:59.459159    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:59.475910    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:59.475984    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:59.488288    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:59.488362    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:59.498758    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:59.498820    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:59.509725    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:59.509796    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:59.519989    6165 logs.go:276] 0 containers: []
	W0906 12:27:59.520000    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:59.520057    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:59.530533    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:59.530553    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:59.530559    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:59.544470    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:59.544487    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:59.556824    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:59.556835    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:59.569151    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:59.569165    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:59.580836    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:59.580847    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:59.592363    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:59.592373    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:59.606201    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:59.606215    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:59.620728    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:59.620738    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:59.643738    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:59.643747    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:59.657173    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:59.657187    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:59.700098    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:59.700110    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:59.704353    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:59.704359    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:59.720709    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:59.720720    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:59.732210    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:59.732219    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:59.751939    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:59.751953    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:59.785985    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:59.785996    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:28:02.308397    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:07.311065    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:07.311432    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:07.354117    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:28:07.354244    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:07.374981    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:28:07.375067    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:07.389205    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:28:07.389278    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:07.401407    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:28:07.401475    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:07.411857    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:28:07.411919    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:07.422054    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:28:07.422118    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:07.432260    6165 logs.go:276] 0 containers: []
	W0906 12:28:07.432271    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:07.432322    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:07.443467    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:28:07.443484    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:28:07.443489    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:28:07.457563    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:28:07.457574    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:28:07.470041    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:28:07.470051    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:28:07.481341    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:28:07.481357    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:28:07.492937    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:28:07.492947    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:28:07.505790    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:28:07.505803    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:28:07.525985    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:07.525995    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:07.567091    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:28:07.567102    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:28:07.582348    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:28:07.582362    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:28:07.597091    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:28:07.597101    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:07.609154    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:07.609165    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:07.613929    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:28:07.613935    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:28:07.625264    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:07.625274    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:07.648923    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:07.648931    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:07.683972    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:28:07.683986    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:28:07.696470    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:28:07.696480    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:28:10.210341    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:15.210949    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:15.211083    6165 kubeadm.go:597] duration metric: took 4m3.438961209s to restartPrimaryControlPlane
	W0906 12:28:15.211221    6165 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 12:28:15.211284    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 12:28:16.255514    6165 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.044222125s)
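Run lines that take longer than about a second are paired with a Completed line carrying a duration, as with the kubeadm reset just above. A plausible shape for that measurement, assuming a roughly one-second reporting threshold (the threshold is a guess, not something the log states):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("/bin/bash", "-c", "sleep 1.04") // stand-in for the kubeadm reset
        if err := cmd.Run(); err != nil {
            fmt.Println("run failed:", err)
            return
        }
        // Only slow commands get the extra Completed line; cf. ssh_runner.go:235.
        if d := time.Since(start); d > time.Second {
            fmt.Printf("Completed: %v: (%s)\n", cmd.Args, d)
        }
    }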
	I0906 12:28:16.255580    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:28:16.260545    6165 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:28:16.263289    6165 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:28:16.266999    6165 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:28:16.267007    6165 kubeadm.go:157] found existing configuration files:
	
	I0906 12:28:16.267033    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/admin.conf
	I0906 12:28:16.270164    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:28:16.270190    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:28:16.273181    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/kubelet.conf
	I0906 12:28:16.275746    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:28:16.275767    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:28:16.278541    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/controller-manager.conf
	I0906 12:28:16.281406    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:28:16.281425    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:28:16.283884    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/scheduler.conf
	I0906 12:28:16.286450    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:28:16.286471    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
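The grep/rm exchanges above amount to a stale-kubeconfig sweep: for each file under /etc/kubernetes, check that it references the expected control-plane endpoint, and delete it when the check fails (here grep exits with status 2 because the files are missing entirely), leaving the subsequent kubeadm init free to regenerate them. Schematically (the endpoint and file list are copied from the log; the loop itself is an illustration, not kubeadm.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50251"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent (or, as in
            // this log, when the file itself does not exist), so the stale
            // config is removed.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Println("removing stale config:", f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }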
	I0906 12:28:16.289511    6165 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 12:28:16.307848    6165 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0906 12:28:16.308004    6165 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 12:28:16.355359    6165 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 12:28:16.355413    6165 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 12:28:16.355463    6165 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 12:28:16.405421    6165 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:28:16.409999    6165 out.go:235]   - Generating certificates and keys ...
	I0906 12:28:16.410030    6165 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 12:28:16.410055    6165 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 12:28:16.410086    6165 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 12:28:16.410128    6165 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 12:28:16.410162    6165 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 12:28:16.410231    6165 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 12:28:16.410284    6165 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 12:28:16.410320    6165 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 12:28:16.410360    6165 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 12:28:16.410398    6165 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 12:28:16.410418    6165 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 12:28:16.410446    6165 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:28:16.475440    6165 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:28:16.526977    6165 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:28:16.558166    6165 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:28:16.649479    6165 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:28:16.677194    6165 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:28:16.677613    6165 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:28:16.677683    6165 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 12:28:16.759940    6165 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:28:16.764170    6165 out.go:235]   - Booting up control plane ...
	I0906 12:28:16.764226    6165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:28:16.764264    6165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:28:16.764297    6165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:28:16.767829    6165 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:28:16.768730    6165 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 12:28:20.770543    6165 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001661 seconds
	I0906 12:28:20.770603    6165 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 12:28:20.773847    6165 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 12:28:21.283733    6165 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 12:28:21.283876    6165 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-549000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 12:28:21.787552    6165 kubeadm.go:310] [bootstrap-token] Using token: utk6ba.0o0w4nted8qb1736
	I0906 12:28:21.792070    6165 out.go:235]   - Configuring RBAC rules ...
	I0906 12:28:21.792133    6165 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 12:28:21.800621    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 12:28:21.802819    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 12:28:21.803805    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0906 12:28:21.804625    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 12:28:21.805689    6165 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 12:28:21.808863    6165 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 12:28:22.010629    6165 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 12:28:22.202245    6165 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 12:28:22.202720    6165 kubeadm.go:310] 
	I0906 12:28:22.202753    6165 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 12:28:22.202757    6165 kubeadm.go:310] 
	I0906 12:28:22.202794    6165 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 12:28:22.202797    6165 kubeadm.go:310] 
	I0906 12:28:22.202809    6165 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 12:28:22.202837    6165 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 12:28:22.202867    6165 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 12:28:22.202869    6165 kubeadm.go:310] 
	I0906 12:28:22.202896    6165 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 12:28:22.202898    6165 kubeadm.go:310] 
	I0906 12:28:22.202924    6165 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 12:28:22.202927    6165 kubeadm.go:310] 
	I0906 12:28:22.202953    6165 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 12:28:22.202990    6165 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 12:28:22.203040    6165 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 12:28:22.203044    6165 kubeadm.go:310] 
	I0906 12:28:22.203084    6165 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 12:28:22.203122    6165 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 12:28:22.203127    6165 kubeadm.go:310] 
	I0906 12:28:22.203168    6165 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token utk6ba.0o0w4nted8qb1736 \
	I0906 12:28:22.203235    6165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 \
	I0906 12:28:22.203259    6165 kubeadm.go:310] 	--control-plane 
	I0906 12:28:22.203264    6165 kubeadm.go:310] 
	I0906 12:28:22.203307    6165 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 12:28:22.203310    6165 kubeadm.go:310] 
	I0906 12:28:22.203362    6165 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token utk6ba.0o0w4nted8qb1736 \
	I0906 12:28:22.203414    6165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 
	I0906 12:28:22.203473    6165 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
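The --discovery-token-ca-cert-hash in both join commands is kubeadm's standard CA pin: a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it; the ca.crt path is an assumption based on the certificateDir logged earlier:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the "[certs] Using certificateDir folder" line above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}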
	I0906 12:28:22.203534    6165 cni.go:84] Creating CNI manager for ""
	I0906 12:28:22.203543    6165 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:22.207152    6165 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 12:28:22.214399    6165 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 12:28:22.217488    6165 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
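The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are minikube's generated bridge CNI config. The exact contents are not reproduced in the log, so the sketch below writes a representative conflist of the shape the bridge plugin accepts; the subnet and plugin list are illustrative assumptions:

package main

import "os"

// An illustrative bridge CNI conflist; the field values are assumptions,
// not the exact bytes minikube generated for this node.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Writing under /etc requires root, matching the sudo mkdir above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}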
	I0906 12:28:22.222375    6165 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:28:22.222412    6165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:28:22.222456    6165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-549000 minikube.k8s.io/updated_at=2024_09_06T12_28_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=running-upgrade-549000 minikube.k8s.io/primary=true
	I0906 12:28:22.259452    6165 ops.go:34] apiserver oom_adj: -16
	I0906 12:28:22.259581    6165 kubeadm.go:1113] duration metric: took 37.20025ms to wait for elevateKubeSystemPrivileges
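The ops.go line confirms the apiserver container runs with oom_adj -16, strongly biasing the kernel OOM killer away from killing it; minikube reads that value with the /proc lookup run a few lines up. A minimal local equivalent of that check (pgrep -n picks the newest match, a simplification of the logged $(pgrep kube-apiserver) expansion):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj` in the log;
	// -n keeps the sketch single-PID.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}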
	I0906 12:28:22.272091    6165 kubeadm.go:394] duration metric: took 4m10.514098792s to StartCluster
	I0906 12:28:22.272109    6165 settings.go:142] acquiring lock: {Name:mk12afd771d0c660db2e89d96a6968c1a28fb2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:22.272202    6165 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:28:22.273544    6165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:22.273793    6165 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:22.273810    6165 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:28:22.273859    6165 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-549000"
	I0906 12:28:22.273873    6165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-549000"
	I0906 12:28:22.273876    6165 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-549000"
	I0906 12:28:22.273889    6165 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-549000"
	W0906 12:28:22.273893    6165 addons.go:243] addon storage-provisioner should already be in state true
	I0906 12:28:22.273902    6165 config.go:182] Loaded profile config "running-upgrade-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:28:22.273911    6165 host.go:66] Checking if "running-upgrade-549000" exists ...
	I0906 12:28:22.274725    6165 kapi.go:59] client config for running-upgrade-549000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101b27f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
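The kapi client config dumped above is a client-go rest.Config: the in-VM apiserver endpoint plus mutual-TLS client credentials from the profile directory. A minimal sketch of constructing an equivalent client from the same files (error handling trimmed to a panic):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and credential paths taken from the rest.Config in the log.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}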
	I0906 12:28:22.274852    6165 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-549000"
	W0906 12:28:22.274856    6165 addons.go:243] addon default-storageclass should already be in state true
	I0906 12:28:22.274865    6165 host.go:66] Checking if "running-upgrade-549000" exists ...
	I0906 12:28:22.278415    6165 out.go:177] * Verifying Kubernetes components...
	I0906 12:28:22.278787    6165 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:22.278792    6165 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 12:28:22.278797    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	I0906 12:28:22.286273    6165 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:28:22.290371    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:28:22.294383    6165 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:28:22.294390    6165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 12:28:22.294396    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	I0906 12:28:22.366386    6165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:28:22.371589    6165 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:28:22.371635    6165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:28:22.376609    6165 api_server.go:72] duration metric: took 102.802334ms to wait for apiserver process to appear ...
	I0906 12:28:22.376616    6165 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:28:22.376623    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:22.382773    6165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:22.405738    6165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:28:22.746441    6165 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:28:22.746455    6165 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:28:27.378913    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:27.378995    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:32.380021    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:32.380061    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:37.380621    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:37.380658    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:42.381726    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:42.381767    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:47.382820    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:47.382883    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:52.383539    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:52.383558    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0906 12:28:52.748298    6165 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0906 12:28:52.752353    6165 out.go:177] * Enabled addons: storage-provisioner
	I0906 12:28:52.760521    6165 addons.go:510] duration metric: took 30.486935167s for enable addons: enabled=[storage-provisioner]
	I0906 12:28:57.384991    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:57.385043    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:02.386945    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:02.386969    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:07.389117    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:07.389147    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:12.391328    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:12.391349    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:17.393474    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:17.393499    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:22.395659    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
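From 12:28:22 onward the start-up wait collapses into the loop visible above: GET https://10.0.2.15:8443/healthz, a ~5s client timeout, "context deadline exceeded", retry, until the overall node wait expires. A sketch of that poll loop; the 5s per-request timeout and 6m budget are inferred from the timestamps and the "Will wait 6m0s for node" line, and InsecureSkipVerify is a brevity assumption (the real client verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// A hung apiserver makes each probe take the full timeout, which
		// is what paces the ~5s cadence between the log lines above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Brevity assumption; minikube trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("gave up waiting for /healthz")
}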
	I0906 12:29:22.395801    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:22.407474    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:22.407544    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:22.418597    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:22.418669    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:22.429202    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:22.429271    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:22.439236    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:22.439309    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:22.449906    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:22.449978    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:22.461027    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:22.461094    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:22.472625    6165 logs.go:276] 0 containers: []
	W0906 12:29:22.472637    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:22.472695    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:22.482948    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:22.482962    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:22.482967    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:22.497121    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:22.497135    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:22.512327    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:22.512341    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:22.529647    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:22.529663    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:22.542753    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:22.542764    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:22.577081    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:22.577091    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:22.611574    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:22.611586    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:22.623878    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:22.623889    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:22.641020    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:22.641034    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:22.653347    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:22.653362    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:22.671842    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:22.671854    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:22.695144    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:22.695153    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:22.699418    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:22.699428    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
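Each failed healthz window triggers the diagnostic sweep above: list every control-plane container by its k8s_ name prefix, tail its last 400 log lines, and add the kubelet/docker journals, dmesg, and a describe-nodes dump; the same sweep then repeats (with fresh timestamps) through the rest of the section. A condensed sketch of the container half of that sweep:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>; CombinedOutput also captures the
			// container's stderr stream.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}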
	I0906 12:29:25.225827    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:30.228126    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:30.228269    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:30.244065    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:30.244141    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:30.256957    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:30.257018    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:30.272171    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:30.272247    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:30.282767    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:30.282830    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:30.293230    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:30.293304    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:30.303782    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:30.303846    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:30.314291    6165 logs.go:276] 0 containers: []
	W0906 12:29:30.314354    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:30.314431    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:30.326182    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:30.326195    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:30.326201    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:30.338699    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:30.338711    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:30.363412    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:30.363429    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:30.398287    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:30.398303    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:30.413062    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:30.413072    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:30.427728    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:30.427744    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:30.439936    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:30.439947    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:30.451740    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:30.451750    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:30.468507    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:30.468519    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:30.473283    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:30.473291    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:30.509494    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:30.509507    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:30.521609    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:30.521619    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:30.540534    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:30.540545    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:33.054611    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:38.056879    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:38.057065    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:38.075040    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:38.075123    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:38.088041    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:38.088113    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:38.100654    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:38.100721    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:38.111309    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:38.111385    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:38.121929    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:38.121999    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:38.134670    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:38.134735    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:38.145278    6165 logs.go:276] 0 containers: []
	W0906 12:29:38.145290    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:38.145346    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:38.155472    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:38.155486    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:38.155491    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:38.170743    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:38.170757    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:38.186354    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:38.186367    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:38.221713    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:38.221721    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:38.225928    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:38.225936    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:38.262442    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:38.262454    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:38.277238    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:38.277249    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:38.288715    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:38.288728    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:38.302388    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:38.302399    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:38.327567    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:38.327578    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:38.338826    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:38.338840    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:38.355660    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:38.355673    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:38.374963    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:38.374973    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:40.890961    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:45.892971    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:45.893176    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:45.918846    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:45.918956    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:45.936790    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:45.936872    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:45.950160    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:45.950231    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:45.962120    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:45.962187    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:45.973121    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:45.973182    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:45.983834    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:45.983888    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:45.993900    6165 logs.go:276] 0 containers: []
	W0906 12:29:45.993913    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:45.993966    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:46.005167    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:46.005183    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:46.005188    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:46.029283    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:46.029293    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:46.064754    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:46.064765    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:46.078633    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:46.078643    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:46.096977    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:46.096990    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:46.110374    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:46.110387    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:46.125557    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:46.125567    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:46.142765    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:46.142777    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:46.154233    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:46.154247    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:46.165483    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:46.165493    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:46.170518    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:46.170526    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:46.204875    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:46.204888    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:46.223429    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:46.223444    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:48.737043    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:53.739167    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:53.739270    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:53.750373    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:53.750444    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:53.760919    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:53.760986    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:53.771382    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:53.771443    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:53.782377    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:53.782448    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:53.793465    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:53.793537    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:53.804931    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:53.804991    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:53.815400    6165 logs.go:276] 0 containers: []
	W0906 12:29:53.815411    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:53.815470    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:53.826087    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:53.826103    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:53.826108    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:53.837725    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:53.837736    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:53.863258    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:53.863267    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:53.867464    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:53.867472    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:53.881613    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:53.881626    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:53.892892    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:53.892905    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:53.904575    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:53.904588    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:53.919314    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:53.919326    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:53.938609    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:53.938619    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:53.950090    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:53.950104    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:53.983646    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:53.983655    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:54.019402    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:54.019412    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:54.034299    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:54.034311    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:56.551827    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:01.554029    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:01.554207    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:01.572438    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:01.572525    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:01.585957    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:01.586026    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:01.597677    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:30:01.597747    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:01.608268    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:01.608326    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:01.618846    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:01.618920    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:01.629571    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:01.629628    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:01.639733    6165 logs.go:276] 0 containers: []
	W0906 12:30:01.639744    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:01.639801    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:01.650592    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:01.650608    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:01.650614    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:01.655548    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:01.655555    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:01.690564    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:01.690576    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:01.702822    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:01.702834    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:01.717386    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:01.717398    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:01.729281    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:01.729291    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:01.740793    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:01.740806    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:01.766374    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:01.766390    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:01.800981    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:01.801000    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:01.818778    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:01.818789    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:01.832864    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:01.832874    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:01.847398    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:01.847411    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:01.864695    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:01.864708    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:04.380700    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:09.381678    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:09.381933    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:09.408416    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:09.408520    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:09.425645    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:09.425729    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:09.439563    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:30:09.439627    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:09.451199    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:09.451267    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:09.462228    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:09.462295    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:09.472553    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:09.472612    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:09.482618    6165 logs.go:276] 0 containers: []
	W0906 12:30:09.482627    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:09.482672    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:09.493045    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:09.493062    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:09.493067    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:09.507137    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:09.507150    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:09.521458    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:09.521469    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:09.533578    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:09.533590    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:09.545881    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:09.545892    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:09.563197    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:09.563211    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:09.575085    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:09.575095    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:09.600191    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:09.600201    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:09.635680    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:09.635694    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:09.647297    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:09.647308    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:09.652057    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:09.652068    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:09.673294    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:09.673305    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:09.685406    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:09.685417    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:12.221103    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:17.223244    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:17.223334    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:17.239259    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:17.239348    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:17.249719    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:17.249783    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:17.260164    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:30:17.260233    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:17.271440    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:17.271510    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:17.282061    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:17.282131    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:17.292571    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:17.292633    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:17.303433    6165 logs.go:276] 0 containers: []
	W0906 12:30:17.303443    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:17.303501    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:17.314061    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:17.314076    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:17.314082    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:17.318933    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:17.318940    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:17.330885    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:17.330895    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:17.342277    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:17.342292    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:17.361851    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:17.361861    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:17.387257    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:17.387266    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:17.398748    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:17.398759    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:17.410500    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:17.410510    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:17.446401    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:17.446415    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:17.483494    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:17.483506    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:17.497850    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:17.497860    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:17.513124    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:17.513135    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:17.525526    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:17.525536    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:20.042509    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:25.044814    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:25.044981    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:25.060857    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:25.060935    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:25.073822    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:25.073895    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:25.088170    6165 logs.go:276] 3 containers: [c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:25.088243    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:25.099502    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:25.099576    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:25.109928    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:25.109991    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:25.120764    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:25.120830    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:25.131126    6165 logs.go:276] 0 containers: []
	W0906 12:30:25.131140    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:25.131196    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:25.141455    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:25.141472    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:25.141477    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:25.152819    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:25.152833    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:25.167264    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:25.167274    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:25.202724    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:25.202734    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:25.214518    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:25.214531    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:25.240223    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:25.240234    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:25.252816    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:25.252830    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:25.272728    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:25.272740    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:25.285012    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:25.285024    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:25.302612    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:25.302625    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:25.307223    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:25.307228    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:25.321420    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:25.321433    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:25.333455    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:25.333470    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:25.351460    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:25.351473    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:27.885647    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:32.887979    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:32.888159    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:32.908833    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:32.908941    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:32.925331    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:32.925409    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:32.938233    6165 logs.go:276] 3 containers: [c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:32.938302    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:32.949513    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:32.949574    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:32.960348    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:32.960412    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:32.970673    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:32.970729    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:32.981243    6165 logs.go:276] 0 containers: []
	W0906 12:30:32.981255    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:32.981300    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:32.991722    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:32.991740    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:32.991745    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:32.996308    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:32.996316    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:33.032000    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:33.032012    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:33.047239    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:33.047249    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:33.072063    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:33.072072    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:33.106733    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:33.106745    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:33.119149    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:33.119159    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:33.130869    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:33.130880    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:33.145822    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:33.145833    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:33.157340    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:33.157355    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:33.171995    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:33.172007    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:33.183801    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:33.183811    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:33.197758    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:33.197766    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:33.215264    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:33.215274    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:35.729430    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:40.731631    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:40.731816    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:40.759069    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:40.759157    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:40.773327    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:40.773402    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:40.785669    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:40.785733    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:40.796578    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:40.796639    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:40.806637    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:40.806693    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:40.817507    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:40.817563    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:40.828014    6165 logs.go:276] 0 containers: []
	W0906 12:30:40.828024    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:40.828072    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:40.837954    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:40.837973    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:40.837979    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:40.854233    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:40.854243    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:40.865660    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:40.865670    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:40.891342    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:40.891349    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:40.925221    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:30:40.925231    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:30:40.937179    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:40.937192    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:40.954803    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:40.954814    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:40.966434    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:40.966444    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:40.977746    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:40.977755    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:40.990228    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:40.990239    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:41.002670    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:41.002679    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:41.017607    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:41.017618    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:41.029427    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:41.029437    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:41.034318    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:41.034325    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:41.071515    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:41.071527    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:43.587466    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:48.589692    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:48.589834    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:48.605476    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:48.605543    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:48.618650    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:48.618722    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:48.630061    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:48.630123    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:48.644047    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:48.644114    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:48.654970    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:48.655036    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:48.665161    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:48.665218    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:48.675232    6165 logs.go:276] 0 containers: []
	W0906 12:30:48.675243    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:48.675298    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:48.685627    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:48.685644    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:48.685649    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:48.702198    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:48.702208    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:48.713879    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:48.713888    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:48.732735    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:48.732747    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:48.744824    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:48.744835    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:48.757191    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:48.757202    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:48.772257    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:48.772267    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:48.788216    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:48.788227    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:48.811670    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:48.811683    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:48.845734    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:48.845754    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:48.854009    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:48.854021    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:48.870662    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:48.870673    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:48.904368    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:48.904380    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:48.916513    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:30:48.916526    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:30:48.927894    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:48.927904    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:51.442213    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:56.444497    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:56.444690    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:56.466412    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:56.466512    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:56.482325    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:56.482408    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:56.494582    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:56.494646    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:56.506304    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:56.506364    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:56.517129    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:56.517207    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:56.531414    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:56.531483    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:56.541380    6165 logs.go:276] 0 containers: []
	W0906 12:30:56.541393    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:56.541461    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:56.551927    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:56.551946    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:56.551952    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:56.578071    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:56.578080    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:56.589799    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:56.589809    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:56.604225    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:56.604238    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:56.616179    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:56.616190    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:56.628430    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:56.628440    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:56.663724    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:56.663734    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:56.675443    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:56.675455    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:56.689524    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:30:56.689535    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:30:56.701147    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:56.701157    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:56.712998    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:56.713011    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:56.731075    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:56.731085    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:56.742258    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:56.742271    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:56.746975    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:56.746982    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:56.780780    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:56.780793    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:59.297571    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:04.299806    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:04.299929    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:04.312137    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:04.312201    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:04.324376    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:04.324445    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:04.335943    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:04.336036    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:04.347121    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:04.347203    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:04.358964    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:04.359034    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:04.370825    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:04.370892    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:04.382113    6165 logs.go:276] 0 containers: []
	W0906 12:31:04.382125    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:04.382181    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:04.393407    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:04.393446    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:04.393454    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:04.408589    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:04.408604    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:04.421887    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:04.421900    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:04.434755    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:04.434769    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:04.469994    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:04.470006    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:04.482208    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:04.482221    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:04.499922    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:04.499932    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:04.524034    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:04.524044    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:04.528563    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:04.528572    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:04.540596    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:04.540606    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:04.552509    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:04.552520    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:04.586102    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:04.586113    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:04.600290    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:04.600302    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:04.614898    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:04.614927    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:04.629389    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:04.629402    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:07.143718    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:12.145968    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:12.146119    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:12.159045    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:12.159122    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:12.170623    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:12.170698    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:12.181128    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:12.181196    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:12.191893    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:12.191963    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:12.202134    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:12.202207    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:12.213361    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:12.213433    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:12.223972    6165 logs.go:276] 0 containers: []
	W0906 12:31:12.223983    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:12.224043    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:12.234823    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:12.234841    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:12.234846    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:12.246287    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:12.246300    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:12.284714    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:12.284728    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:12.299868    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:12.299882    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:12.311900    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:12.311909    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:12.323675    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:12.323684    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:12.360731    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:12.360747    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:12.375371    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:12.375382    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:12.387443    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:12.387454    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:12.399251    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:12.399265    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:12.410716    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:12.410727    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:12.428252    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:12.428266    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:12.432766    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:12.432775    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:12.446652    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:12.446665    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:12.461175    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:12.461189    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:14.987470    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:19.988222    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:19.988315    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:19.999912    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:19.999988    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:20.011268    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:20.011337    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:20.021998    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:20.022071    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:20.032425    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:20.032495    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:20.044233    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:20.044299    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:20.054419    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:20.054483    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:20.065061    6165 logs.go:276] 0 containers: []
	W0906 12:31:20.065071    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:20.065121    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:20.075488    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:20.075504    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:20.075509    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:20.108498    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:20.108510    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:20.122902    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:20.122914    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:20.134770    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:20.134781    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:20.146778    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:20.146791    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:20.158704    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:20.158714    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:20.170447    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:20.170457    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:20.207638    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:20.207650    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:20.226419    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:20.226433    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:20.242371    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:20.242384    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:20.254076    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:20.254090    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:20.272672    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:20.272685    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:20.290635    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:20.290647    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:20.305676    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:20.305688    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:20.310562    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:20.310572    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:22.834591    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:27.836941    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:27.837084    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:27.849073    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:27.849153    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:27.859756    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:27.859838    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:27.870157    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:27.870246    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:27.881127    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:27.881195    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:27.891465    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:27.891528    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:27.902403    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:27.902471    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:27.912900    6165 logs.go:276] 0 containers: []
	W0906 12:31:27.912910    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:27.912968    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:27.923549    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:27.923569    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:27.923575    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:27.962977    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:27.962989    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:27.976808    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:27.976818    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:28.000638    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:28.000649    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:28.012281    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:28.012292    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:28.026884    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:28.026895    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:28.038545    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:28.038557    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:28.050746    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:28.050757    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:28.065653    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:28.065666    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:28.077575    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:28.077585    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:28.095269    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:28.095279    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:28.106996    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:28.107010    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:28.118598    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:28.118609    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:28.130285    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:28.130299    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:28.164787    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:28.164798    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:30.671569    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:35.673803    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:35.673917    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:35.686429    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:35.686503    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:35.698003    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:35.698070    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:35.709038    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:35.709113    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:35.720173    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:35.720244    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:35.730779    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:35.730846    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:35.741453    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:35.741519    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:35.751600    6165 logs.go:276] 0 containers: []
	W0906 12:31:35.751609    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:35.751666    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:35.763871    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:35.763890    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:35.763896    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:35.799719    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:35.799730    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:35.823320    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:35.823330    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:35.835596    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:35.835610    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:35.860602    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:35.860612    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:35.865235    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:35.865243    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:35.884631    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:35.884646    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:35.901563    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:35.901576    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:35.917637    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:35.917649    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:35.932800    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:35.932811    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:35.944518    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:35.944528    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:35.978414    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:35.978427    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:35.995370    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:35.995383    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:36.009398    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:36.009410    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:36.021902    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:36.021912    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:38.536821    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:43.539032    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:43.539224    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:43.558109    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:43.558205    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:43.572653    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:43.572730    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:43.585139    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:43.585211    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:43.596961    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:43.597030    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:43.607432    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:43.607502    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:43.618467    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:43.618545    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:43.629181    6165 logs.go:276] 0 containers: []
	W0906 12:31:43.629192    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:43.629251    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:43.639979    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:43.639996    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:43.640001    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:43.655411    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:43.655421    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:43.690119    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:43.690130    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:43.705515    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:43.705525    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:43.716841    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:43.716851    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:43.738248    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:43.738259    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:43.743471    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:43.743479    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:43.758094    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:43.758106    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:43.770249    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:43.770261    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:43.781905    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:43.781919    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:43.805934    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:43.805945    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:43.838928    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:43.838937    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:43.852866    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:43.852880    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:43.864858    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:43.864870    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:43.876465    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:43.876475    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:46.390496    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:51.392280    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:51.392404    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:51.404879    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:51.404944    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:51.415712    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:51.415782    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:51.426544    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:51.426609    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:51.437288    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:51.437358    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:51.455903    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:51.455975    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:51.467173    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:51.467237    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:51.476966    6165 logs.go:276] 0 containers: []
	W0906 12:31:51.476977    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:51.477034    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:51.487730    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:51.487748    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:51.487754    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:51.492054    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:51.492064    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:51.511250    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:51.511262    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:51.523288    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:51.523301    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:51.534882    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:51.534895    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:51.547405    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:51.547416    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:51.562875    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:51.562884    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:51.574471    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:51.574482    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:51.599465    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:51.599476    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:51.634269    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:51.634282    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:51.670078    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:51.670091    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:51.681747    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:51.681761    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:51.694087    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:51.694098    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:51.711465    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:51.711478    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:51.730361    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:51.730371    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
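Each retry cycle above has the same shape: enumerate the containers for every control-plane component with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`, then tail each hit with `docker logs --tail 400`. A minimal, self-contained Go sketch of that pattern follows; as an assumption it runs docker on the local host and abbreviates the component list, whereas minikube executes these same commands inside the guest over SSH (ssh_runner.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter name=k8s_<component> --format {{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // Abbreviated component list; the log above also covers kube-proxy,
        // kube-controller-manager, kindnet and storage-provisioner.
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("enumerate", c, "failed:", err)
                continue
            }
            for _, id := range ids {
                // Tail the last 400 lines, mirroring "docker logs --tail 400 <id>".
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }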
	I0906 12:31:54.245534    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:59.247469    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:59.247629    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:59.258410    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:59.258472    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:59.269422    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:59.269493    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:59.280817    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:59.280885    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:59.295785    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:59.295847    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:59.307557    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:59.307623    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:59.318286    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:59.318355    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:59.329313    6165 logs.go:276] 0 containers: []
	W0906 12:31:59.329323    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:59.329379    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:59.343905    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:59.343921    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:59.343926    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:59.355621    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:59.355631    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:59.367377    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:59.367388    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:59.379377    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:59.379387    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:59.397347    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:59.397358    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:59.421828    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:59.421838    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:59.433158    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:59.433168    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:59.468456    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:59.468464    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:59.482927    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:59.482936    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:59.495277    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:59.495291    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:59.510212    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:59.510222    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:59.522316    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:59.522326    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:59.527058    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:59.527067    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:59.561030    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:59.561041    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:59.575750    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:59.575760    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:32:02.089295    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:07.091567    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:07.091699    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:07.106304    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:32:07.106376    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:07.120025    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:32:07.120091    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:07.131078    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:32:07.131150    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:07.144419    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:32:07.144478    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:07.155245    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:32:07.155310    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:07.165912    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:32:07.165986    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:07.177815    6165 logs.go:276] 0 containers: []
	W0906 12:32:07.177826    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:07.177891    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:07.189102    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:32:07.189118    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:32:07.189123    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:32:07.200346    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:32:07.200361    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:32:07.215768    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:32:07.215780    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:32:07.231115    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:32:07.231128    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:32:07.248753    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:07.248768    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:07.272765    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:32:07.272776    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:07.284879    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:07.284894    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:07.319930    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:32:07.319941    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:32:07.334560    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:32:07.334574    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:32:07.347070    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:07.347081    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:07.351453    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:32:07.351463    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:32:07.365884    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:32:07.365894    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:32:07.378146    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:32:07.378159    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:32:07.389878    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:07.389893    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:07.423716    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:32:07.423727    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:32:09.937341    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:14.939591    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:14.939710    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:14.952281    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:32:14.952351    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:14.963524    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:32:14.963589    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:14.974335    6165 logs.go:276] 4 containers: [ce344e93b0f6 b8d56638d69b c5f07fc47b7b c714dbf82d9d]
	I0906 12:32:14.974402    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:14.985314    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:32:14.985381    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:14.995775    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:32:14.995840    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:15.005809    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:32:15.005865    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:15.016760    6165 logs.go:276] 0 containers: []
	W0906 12:32:15.016770    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:15.016827    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:15.027234    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:32:15.027252    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:32:15.027257    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:32:15.044251    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:15.044261    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:15.068327    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:32:15.068335    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:15.079557    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:32:15.079567    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:32:15.093557    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:32:15.093568    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:32:15.108242    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:32:15.108253    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:32:15.122408    6165 logs.go:123] Gathering logs for coredns [ce344e93b0f6] ...
	I0906 12:32:15.122421    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce344e93b0f6"
	I0906 12:32:15.136519    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:15.136531    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:15.171182    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:15.171197    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:15.204788    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:32:15.204804    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:32:15.216914    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:32:15.216929    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:32:15.230303    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:32:15.230320    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:32:15.253067    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:32:15.253081    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:32:15.273695    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:15.273709    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:15.278870    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:32:15.278880    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:32:17.792280    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:22.794443    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
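The checks above poll https://10.0.2.15:8443/healthz roughly every 2.5 seconds with a 5-second client timeout (hence the repeated "Client.Timeout exceeded" lines) until the overall start deadline of 6m0s, named in the error below, expires. A minimal sketch of such a poll loop, with the URL, intervals, and TLS handling as illustrative assumptions; the real client trusts the cluster CA rather than skipping verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 or the overall deadline passes.
    func waitHealthy(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // mirrors "Client.Timeout exceeded" above
            Transport: &http.Transport{
                // Assumption for brevity only; do not do this in real code.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz reported healthy
                }
            }
            time.Sleep(2500 * time.Millisecond) // assumed retry interval
        }
        return fmt.Errorf("apiserver healthz never reported healthy within %s", overall)
    }

    func main() {
        if err := waitHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }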
	I0906 12:32:22.798893    6165 out.go:201] 
	W0906 12:32:22.801795    6165 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0906 12:32:22.801800    6165 out.go:270] * 
	W0906 12:32:22.802196    6165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:32:22.809594    6165 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-549000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-09-06 12:32:22.909883 -0700 PDT m=+3827.532107959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-549000 -n running-upgrade-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-549000 -n running-upgrade-549000: exit status 2 (15.673452083s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
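The post-mortem harness pattern here is: run the binary, capture combined output, and treat selected non-zero exit statuses as acceptable ("exit status 2 (may be ok)", since minikube encodes host state in its exit code). A minimal Go sketch of that pattern under the same assumptions, with the binary path and profile name copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", "running-upgrade-549000")
        out, err := cmd.CombinedOutput()
        code := 0
        if ee, ok := err.(*exec.ExitError); ok {
            code = ee.ExitCode()
        } else if err != nil {
            fmt.Println("could not run binary:", err) // e.g. binary not found
            return
        }
        fmt.Printf("output: %q (exit %d)\n", out, code)
        // minikube reserves distinct exit codes for distinct host states, so
        // a non-zero status here is not automatically a test failure.
        if code != 0 && code != 2 {
            fmt.Println("unexpected exit status:", code)
        }
    }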
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-549000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo cat                            | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo cat                            | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo cat                            | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo cat                            | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo                                | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo find                           | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-269000 sudo crio                           | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-269000                                     | cilium-269000             | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT | 06 Sep 24 12:22 PDT |
	| start   | -p kubernetes-upgrade-140000                         | kubernetes-upgrade-140000 | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-868000                             | offline-docker-868000     | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT | 06 Sep 24 12:22 PDT |
	| stop    | -p kubernetes-upgrade-140000                         | kubernetes-upgrade-140000 | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT | 06 Sep 24 12:22 PDT |
	| start   | -p kubernetes-upgrade-140000                         | kubernetes-upgrade-140000 | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-236000                            | minikube                  | jenkins | v1.26.0 | 06 Sep 24 12:22 PDT | 06 Sep 24 12:24 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-140000                         | kubernetes-upgrade-140000 | jenkins | v1.34.0 | 06 Sep 24 12:22 PDT | 06 Sep 24 12:22 PDT |
	| start   | -p running-upgrade-549000                            | minikube                  | jenkins | v1.26.0 | 06 Sep 24 12:22 PDT | 06 Sep 24 12:23 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p running-upgrade-549000                            | running-upgrade-549000    | jenkins | v1.34.0 | 06 Sep 24 12:23 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-236000 stop                          | minikube                  | jenkins | v1.26.0 | 06 Sep 24 12:24 PDT | 06 Sep 24 12:24 PDT |
	| start   | -p stopped-upgrade-236000                            | stopped-upgrade-236000    | jenkins | v1.34.0 | 06 Sep 24 12:24 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:24:19
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
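Every entry below follows that klog header layout. For readers who want to post-process such logs, a minimal Go sketch that splits one line into its fields (the field names are our own; the sample line is taken from this log):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := "I0906 12:24:19.515683    6239 out.go:345] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }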
	I0906 12:24:19.515683    6239 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:24:19.515801    6239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:24:19.515804    6239 out.go:358] Setting ErrFile to fd 2...
	I0906 12:24:19.515806    6239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:24:19.515948    6239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:24:19.517040    6239 out.go:352] Setting JSON to false
	I0906 12:24:19.534176    6239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5029,"bootTime":1725645630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:24:19.534248    6239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:24:19.538698    6239 out.go:177] * [stopped-upgrade-236000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:24:19.545755    6239 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:24:19.545804    6239 notify.go:220] Checking for updates...
	I0906 12:24:19.551663    6239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:24:19.557593    6239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:24:19.560729    6239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:24:19.563707    6239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:24:19.566696    6239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:24:19.569941    6239 config.go:182] Loaded profile config "stopped-upgrade-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:24:19.573663    6239 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 12:24:19.576623    6239 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:24:19.580674    6239 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:24:19.586653    6239 start.go:297] selected driver: qemu2
	I0906 12:24:19.586661    6239 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:24:19.586747    6239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:24:19.589178    6239 cni.go:84] Creating CNI manager for ""
	I0906 12:24:19.589197    6239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:24:19.589223    6239 start.go:340] cluster config:
	{Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:24:19.589272    6239 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:24:19.596673    6239 out.go:177] * Starting "stopped-upgrade-236000" primary control-plane node in "stopped-upgrade-236000" cluster
	I0906 12:24:19.600666    6239 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0906 12:24:19.600679    6239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0906 12:24:19.600686    6239 cache.go:56] Caching tarball of preloaded images
	I0906 12:24:19.600733    6239 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:24:19.600738    6239 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0906 12:24:19.600796    6239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/config.json ...
	I0906 12:24:19.601145    6239 start.go:360] acquireMachinesLock for stopped-upgrade-236000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:24:19.601179    6239 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "stopped-upgrade-236000"
	I0906 12:24:19.601189    6239 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:24:19.601193    6239 fix.go:54] fixHost starting: 
	I0906 12:24:19.601306    6239 fix.go:112] recreateIfNeeded on stopped-upgrade-236000: state=Stopped err=<nil>
	W0906 12:24:19.601315    6239 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:24:19.605668    6239 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-236000" ...
	I0906 12:24:18.712505    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:18.712531    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:19.613650    6239 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:24:19.613717    6239 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50256-:22,hostfwd=tcp::50257-:2376,hostname=stopped-upgrade-236000 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/disk.qcow2
	I0906 12:24:19.658234    6239 main.go:141] libmachine: STDOUT: 
	I0906 12:24:19.658269    6239 main.go:141] libmachine: STDERR: 
	I0906 12:24:19.658274    6239 main.go:141] libmachine: Waiting for VM to start (ssh -p 50256 docker@127.0.0.1)...
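"Waiting for VM to start" here means polling the qemu user-mode hostfwd port (50256, from the -nic flag above) until something accepts a TCP connection. A minimal Go sketch of that wait, with the timeouts as assumptions; a real implementation would then complete an SSH handshake as the docker user:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort retries a TCP dial until it succeeds or the deadline passes.
    func waitForPort(addr string, overall time.Duration) error {
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("nothing listening on %s after %s", addr, overall)
    }

    func main() {
        if err := waitForPort("127.0.0.1:50256", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh port is accepting connections")
    }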
	I0906 12:24:23.712798    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:23.712862    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:28.713329    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:28.713365    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:33.714026    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:33.714066    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:38.910760    6239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/config.json ...
	I0906 12:24:38.911624    6239 machine.go:93] provisionDockerMachine start ...
	I0906 12:24:38.911822    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:38.912434    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:38.912450    6239 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:24:38.991296    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:24:38.991324    6239 buildroot.go:166] provisioning hostname "stopped-upgrade-236000"
	I0906 12:24:38.991450    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:38.991665    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:38.991674    6239 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-236000 && echo "stopped-upgrade-236000" | sudo tee /etc/hostname
	I0906 12:24:39.065044    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-236000
	
	I0906 12:24:39.065104    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.065257    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.065267    6239 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-236000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-236000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-236000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:24:39.134095    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:24:39.134108    6239 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-2143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-2143/.minikube}
	I0906 12:24:39.134118    6239 buildroot.go:174] setting up certificates
	I0906 12:24:39.134123    6239 provision.go:84] configureAuth start
	I0906 12:24:39.134133    6239 provision.go:143] copyHostCerts
	I0906 12:24:39.134217    6239 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem, removing ...
	I0906 12:24:39.134227    6239 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem
	I0906 12:24:39.134355    6239 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem (1082 bytes)
	I0906 12:24:39.134550    6239 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem, removing ...
	I0906 12:24:39.134555    6239 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem
	I0906 12:24:39.134610    6239 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem (1123 bytes)
	I0906 12:24:39.134728    6239 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem, removing ...
	I0906 12:24:39.134732    6239 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem
	I0906 12:24:39.134788    6239 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem (1675 bytes)
	I0906 12:24:39.134906    6239 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-236000 san=[127.0.0.1 localhost minikube stopped-upgrade-236000]
	I0906 12:24:39.263625    6239 provision.go:177] copyRemoteCerts
	I0906 12:24:39.263667    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:24:39.263675    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:24:39.297209    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 12:24:39.303976    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 12:24:39.310773    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:24:39.317979    6239 provision.go:87] duration metric: took 183.8525ms to configureAuth
	I0906 12:24:39.317988    6239 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:24:39.318082    6239 config.go:182] Loaded profile config "stopped-upgrade-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:24:39.318121    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.318201    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.318205    6239 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:24:39.379818    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:24:39.379829    6239 buildroot.go:70] root file system type: tmpfs
	I0906 12:24:39.379883    6239 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:24:39.379936    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.380059    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.380097    6239 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:24:39.446858    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:24:39.446912    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.447030    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.447046    6239 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:24:38.714723    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:38.714818    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:39.816920    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
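The step above is an install-if-changed idiom: render the unit to docker.service.new, diff it against the live unit, and only on a difference move it into place and daemon-reload/enable/restart. On this first run the diff fails with "can't stat" because no unit exists yet, so the move-and-restart branch runs. A minimal local Go sketch of the same idiom, with the paths as illustrative assumptions; minikube performs it remotely as the single shell command shown above:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit writes the rendered unit to a staging path, compares it to
    // the live unit, and only applies and restarts when the two differ.
    func installUnit(rendered []byte, live, staged string) error {
        if err := os.WriteFile(staged, rendered, 0o644); err != nil {
            return err
        }
        current, err := os.ReadFile(live)
        if err == nil && bytes.Equal(current, rendered) {
            return os.Remove(staged) // unchanged: nothing to do
        }
        if err := os.Rename(staged, live); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := installUnit(unit,
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new"); err != nil {
            fmt.Println(err)
        }
    }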
	I0906 12:24:39.816935    6239 machine.go:96] duration metric: took 905.304833ms to provisionDockerMachine
	I0906 12:24:39.816942    6239 start.go:293] postStartSetup for "stopped-upgrade-236000" (driver="qemu2")
	I0906 12:24:39.816950    6239 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:24:39.817004    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:24:39.817014    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:24:39.852685    6239 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:24:39.853966    6239 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 12:24:39.853975    6239 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/addons for local assets ...
	I0906 12:24:39.854066    6239 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/files for local assets ...
	I0906 12:24:39.854181    6239 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem -> 26722.pem in /etc/ssl/certs
	I0906 12:24:39.854307    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:24:39.857329    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /etc/ssl/certs/26722.pem (1708 bytes)
	I0906 12:24:39.864022    6239 start.go:296] duration metric: took 47.073333ms for postStartSetup
	I0906 12:24:39.864043    6239 fix.go:56] duration metric: took 20.262996417s for fixHost
	I0906 12:24:39.864081    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.864188    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.864192    6239 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:24:39.923300    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650679.447463379
	
	I0906 12:24:39.923308    6239 fix.go:216] guest clock: 1725650679.447463379
	I0906 12:24:39.923311    6239 fix.go:229] Guest: 2024-09-06 12:24:39.447463379 -0700 PDT Remote: 2024-09-06 12:24:39.864045 -0700 PDT m=+20.368479293 (delta=-416.581621ms)
	I0906 12:24:39.923323    6239 fix.go:200] guest clock delta is within tolerance: -416.581621ms
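
For reference, a small sketch of the guest-clock check that produces the delta above: parse the guest's `date +%s.%N` output and compare it against the host's wall clock. The 1-second tolerance here is an assumption for illustration, not necessarily minikube's configured threshold.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // guestClockDelta parses `date +%s.%N` output and returns guest minus
    // host, mirroring the delta printed by fix.go:229 above.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(dateOutput, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
    	return guest.Sub(host), nil
    }

    func main() {
    	// Host wall clock at the moment of the probe (12:24:39.864045 PDT).
    	host := time.Date(2024, 9, 6, 19, 24, 39, 864045000, time.UTC)
    	d, _ := guestClockDelta("1725650679.447463379", host)
    	fmt.Printf("delta=%v, within 1s tolerance: %v\n", d, d.Abs() < time.Second)
    }
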
	I0906 12:24:39.923326    6239 start.go:83] releasing machines lock for "stopped-upgrade-236000", held for 20.322288792s
	I0906 12:24:39.923387    6239 ssh_runner.go:195] Run: cat /version.json
	I0906 12:24:39.923391    6239 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:24:39.923396    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:24:39.923407    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	W0906 12:24:39.923988    6239 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50256: connect: connection refused
	I0906 12:24:39.924010    6239 retry.go:31] will retry after 183.070329ms: dial tcp [::1]:50256: connect: connection refused
	W0906 12:24:40.142165    6239 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0906 12:24:40.142232    6239 ssh_runner.go:195] Run: systemctl --version
	I0906 12:24:40.144373    6239 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:24:40.146213    6239 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:24:40.146241    6239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0906 12:24:40.149588    6239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0906 12:24:40.154781    6239 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
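
What the two `find`/`sed` invocations above accomplish, sketched in Go under the assumption that each conflist is small enough to rewrite in memory: IPv6 `dst`/`subnet` entries are dropped, and the IPv4 subnet is pinned to the cluster's pod CIDR.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    var (
    	// Lines carrying IPv6 ranges (a ":" inside the quoted value).
    	v6Line = regexp.MustCompile(`(?m)^.*"(?:dst|subnet)": ".*:.*".*\n`)
    	// Any remaining IPv4 subnet assignment.
    	v4Range = regexp.MustCompile(`"subnet": "[^"]*"`)
    )

    func rewriteCNI(conf []byte, podCIDR string) []byte {
    	out := v6Line.ReplaceAll(conf, nil)
    	return v4Range.ReplaceAll(out, []byte(fmt.Sprintf(`"subnet": %q`, podCIDR)))
    }

    func main() {
    	in := []byte(`{"ranges": [[{"subnet": "10.88.0.0/16"}]]}`)
    	fmt.Println(string(rewriteCNI(in, "10.244.0.0/16")))
    }
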
	I0906 12:24:40.154793    6239 start.go:495] detecting cgroup driver to use...
	I0906 12:24:40.154864    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:24:40.162025    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0906 12:24:40.165379    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:24:40.168603    6239 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:24:40.168635    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:24:40.171731    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:24:40.174629    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:24:40.178085    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:24:40.181304    6239 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:24:40.184539    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:24:40.187372    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:24:40.190424    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:24:40.193662    6239 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:24:40.196625    6239 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:24:40.199162    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:40.278071    6239 ssh_runner.go:195] Run: sudo systemctl restart containerd
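
One of the containerd edits above, shown as a local-file sketch rather than remote `sed` (the path and 0644 mode are assumptions): force `SystemdCgroup = false` so containerd matches the "cgroupfs" driver chosen here.

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Preserve indentation, replace only the assigned value.
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }
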
	I0906 12:24:40.284381    6239 start.go:495] detecting cgroup driver to use...
	I0906 12:24:40.284461    6239 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:24:40.290189    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:24:40.295887    6239 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:24:40.305643    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:24:40.310124    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:24:40.314954    6239 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:24:40.345846    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:24:40.350707    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:24:40.356070    6239 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:24:40.357456    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:24:40.360258    6239 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 12:24:40.365401    6239 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:24:40.447590    6239 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:24:40.518115    6239 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:24:40.518177    6239 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:24:40.523295    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:40.601195    6239 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:24:41.759830    6239 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158622709s)
	I0906 12:24:41.759908    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:24:41.764543    6239 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:24:41.772117    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:24:41.776466    6239 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:24:41.853261    6239 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:24:41.925814    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:42.006856    6239 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:24:42.012813    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:24:42.017448    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:42.095006    6239 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:24:42.133651    6239 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:24:42.133726    6239 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:24:42.137044    6239 start.go:563] Will wait 60s for crictl version
	I0906 12:24:42.137091    6239 ssh_runner.go:195] Run: which crictl
	I0906 12:24:42.138321    6239 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:24:42.153080    6239 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0906 12:24:42.153158    6239 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:24:42.169382    6239 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:24:42.195275    6239 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0906 12:24:42.195342    6239 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0906 12:24:42.196521    6239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
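
A sketch of the /etc/hosts idiom in the command above, assuming local file access instead of the logged grep/echo pipeline over SSH: drop any stale line for the name, then append the fresh mapping.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost removes any existing entry ending in "\t<name>" and
    // appends "<ip>\t<name>", matching the grep -v / echo pipeline.
    func upsertHost(hosts, ip, name string) string {
    	var out []string
    	for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(l, "\t"+name) {
    			out = append(out, l)
    		}
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	b, _ := os.ReadFile("/etc/hosts")
    	fmt.Print(upsertHost(string(b), "10.0.2.2", "host.minikube.internal"))
    }
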
	I0906 12:24:42.199879    6239 kubeadm.go:883] updating cluster {Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0906 12:24:42.199921    6239 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0906 12:24:42.199962    6239 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:24:42.210242    6239 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:24:42.210250    6239 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0906 12:24:42.210298    6239 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:24:42.213827    6239 ssh_runner.go:195] Run: which lz4
	I0906 12:24:42.215297    6239 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 12:24:42.216524    6239 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 12:24:42.216535    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0906 12:24:43.162451    6239 docker.go:649] duration metric: took 947.18675ms to copy over tarball
	I0906 12:24:43.162506    6239 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 12:24:44.324756    6239 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162245375s)
	I0906 12:24:44.324773    6239 ssh_runner.go:146] rm: /preloaded.tar.lz4
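
The `stat -c "%s %y"` probe at 12:24:42.215 above is the transfer gate used throughout this log: a non-zero exit means the file is absent on the guest and must be copied over. A sketch of that check, assuming a plain `ssh` client and a hypothetical `minikube-guest` host alias in place of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // needsTransfer reports whether path is absent on the guest; a failed
    // remote stat (exit status 1) means the file must be scp'd over.
    func needsTransfer(host, path string) bool {
    	err := exec.Command("ssh", host, "stat", "-c", "%s %y", path).Run()
    	return err != nil
    }

    func main() {
    	fmt.Println(needsTransfer("minikube-guest", "/preloaded.tar.lz4"))
    }
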
	I0906 12:24:44.340054    6239 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:24:44.342919    6239 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0906 12:24:44.348122    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:44.434175    6239 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:24:43.715794    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:43.715823    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:45.933141    6239 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.498960292s)
	I0906 12:24:45.933224    6239 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:24:45.945273    6239 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:24:45.945282    6239 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0906 12:24:45.945288    6239 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 12:24:45.949081    6239 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:45.950705    6239 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:45.952685    6239 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:45.952825    6239 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:45.953457    6239 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:45.953732    6239 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:45.954717    6239 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:45.956317    6239 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:45.956427    6239 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:45.958289    6239 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0906 12:24:45.958360    6239 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:45.958381    6239 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:45.958881    6239 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:45.959427    6239 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:45.960453    6239 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0906 12:24:45.961031    6239 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.342345    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:46.356147    6239 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0906 12:24:46.356168    6239 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:46.356223    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:46.366352    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0906 12:24:46.381539    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:46.386507    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:46.387795    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:46.392486    6239 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0906 12:24:46.392512    6239 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:46.392564    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:46.400884    6239 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0906 12:24:46.400905    6239 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:46.400955    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:46.406689    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:46.407134    6239 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0906 12:24:46.407151    6239 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:46.407177    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:46.412824    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0906 12:24:46.419256    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0906 12:24:46.422512    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0906 12:24:46.429878    6239 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0906 12:24:46.429902    6239 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:46.429951    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0906 12:24:46.429955    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0906 12:24:46.441346    6239 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0906 12:24:46.441477    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.443481    6239 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0906 12:24:46.443499    6239 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0906 12:24:46.443527    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0906 12:24:46.448098    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0906 12:24:46.455857    6239 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0906 12:24:46.455883    6239 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.455935    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.456891    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0906 12:24:46.456997    6239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0906 12:24:46.466537    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0906 12:24:46.466634    6239 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0906 12:24:46.466645    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0906 12:24:46.466660    6239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0906 12:24:46.469054    6239 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0906 12:24:46.469069    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0906 12:24:46.482220    6239 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0906 12:24:46.482240    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0906 12:24:46.523314    6239 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0906 12:24:46.529677    6239 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0906 12:24:46.529690    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0906 12:24:46.568707    6239 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0906 12:24:46.725504    6239 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 12:24:46.725693    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:46.745408    6239 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0906 12:24:46.745446    6239 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:46.745513    6239 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:46.760911    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 12:24:46.761028    6239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 12:24:46.762474    6239 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0906 12:24:46.762484    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0906 12:24:46.790529    6239 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 12:24:46.790544    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0906 12:24:47.021794    6239 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 12:24:47.021835    6239 cache_images.go:92] duration metric: took 1.076548917s to LoadCachedImages
	W0906 12:24:47.021872    6239 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
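
Each "Loading image" step above streams a cached tarball into the guest's Docker daemon via `sudo cat <tar> | docker load`. A sketch of that step, again assuming a plain `ssh` client and the hypothetical `minikube-guest` alias rather than minikube's ssh_runner:

    package main

    import (
    	"os"
    	"os/exec"
    )

    // loadImage pipes a cached image tarball on the guest into docker load.
    func loadImage(host, remoteTar string) error {
    	cmd := exec.Command("ssh", host,
    		"/bin/bash", "-c", "sudo cat "+remoteTar+" | docker load")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := loadImage("minikube-guest", "/var/lib/minikube/images/pause_3.7"); err != nil {
    		os.Exit(1)
    	}
    }
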
	I0906 12:24:47.021878    6239 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0906 12:24:47.021936    6239 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-236000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:24:47.021995    6239 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:24:47.035540    6239 cni.go:84] Creating CNI manager for ""
	I0906 12:24:47.035554    6239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:24:47.035563    6239 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:24:47.035572    6239 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-236000 NodeName:stopped-upgrade-236000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:24:47.035645    6239 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-236000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
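
The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:181. A toy `text/template` sketch (not minikube's actual template; the field names are copied from that log line) of how a few of those options map into the ClusterConfiguration:

    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    type opts struct {
    	KubernetesVersion   string
    	ControlPlaneAddress string
    	APIServerPort       int
    	PodSubnet           string
    	ServiceCIDR         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	t.Execute(os.Stdout, opts{
    		KubernetesVersion:   "v1.24.1",
    		ControlPlaneAddress: "control-plane.minikube.internal",
    		APIServerPort:       8443,
    		PodSubnet:           "10.244.0.0/16",
    		ServiceCIDR:         "10.96.0.0/12",
    	})
    }
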
	
	I0906 12:24:47.035709    6239 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0906 12:24:47.038542    6239 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:24:47.038576    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:24:47.041533    6239 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0906 12:24:47.046684    6239 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:24:47.051901    6239 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0906 12:24:47.056971    6239 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0906 12:24:47.058239    6239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:24:47.062218    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:47.141391    6239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:24:47.147170    6239 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000 for IP: 10.0.2.15
	I0906 12:24:47.147178    6239 certs.go:194] generating shared ca certs ...
	I0906 12:24:47.147192    6239 certs.go:226] acquiring lock for ca certs: {Name:mkeb2acf337d35e5b807329b963b0c0723ad2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.147346    6239 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key
	I0906 12:24:47.147396    6239 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key
	I0906 12:24:47.147404    6239 certs.go:256] generating profile certs ...
	I0906 12:24:47.147479    6239 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.key
	I0906 12:24:47.147498    6239 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6
	I0906 12:24:47.147512    6239 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0906 12:24:47.236019    6239 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6 ...
	I0906 12:24:47.236037    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6: {Name:mke61c1e49c05f6676b28fae907efded9d9fb0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.237384    6239 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6 ...
	I0906 12:24:47.237390    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6: {Name:mk6e11fa94f9059d5bb968b331725636129e1469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.237551    6239 certs.go:381] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt
	I0906 12:24:47.237708    6239 certs.go:385] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key
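
Generating the apiserver cert "with IP's: [...]" above means issuing a serving certificate whose subject alternative names cover the service VIP, loopback, and node IPs. A standard-library sketch; unlike minikube, which signs with the shared minikubeCA, this one self-signs for brevity, so treat it as illustrative only:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		// The SAN list from the crypto.go:68 line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
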
	I0906 12:24:47.237879    6239 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/proxy-client.key
	I0906 12:24:47.238019    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem (1338 bytes)
	W0906 12:24:47.238049    6239 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672_empty.pem, impossibly tiny 0 bytes
	I0906 12:24:47.238055    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:24:47.238075    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem (1082 bytes)
	I0906 12:24:47.238098    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:24:47.238126    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem (1675 bytes)
	I0906 12:24:47.238167    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem (1708 bytes)
	I0906 12:24:47.238499    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:24:47.245755    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 12:24:47.252445    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:24:47.259313    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 12:24:47.266312    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 12:24:47.273874    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0906 12:24:47.281303    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:24:47.288141    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:24:47.294882    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /usr/share/ca-certificates/26722.pem (1708 bytes)
	I0906 12:24:47.302154    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:24:47.309499    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem --> /usr/share/ca-certificates/2672.pem (1338 bytes)
	I0906 12:24:47.316959    6239 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:24:47.322126    6239 ssh_runner.go:195] Run: openssl version
	I0906 12:24:47.324072    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26722.pem && ln -fs /usr/share/ca-certificates/26722.pem /etc/ssl/certs/26722.pem"
	I0906 12:24:47.326999    6239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26722.pem
	I0906 12:24:47.328416    6239 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:44 /usr/share/ca-certificates/26722.pem
	I0906 12:24:47.328435    6239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26722.pem
	I0906 12:24:47.330119    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26722.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:24:47.333570    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:24:47.336840    6239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:47.338285    6239 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:47.338301    6239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:47.340062    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:24:47.342916    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2672.pem && ln -fs /usr/share/ca-certificates/2672.pem /etc/ssl/certs/2672.pem"
	I0906 12:24:47.346384    6239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2672.pem
	I0906 12:24:47.347773    6239 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:44 /usr/share/ca-certificates/2672.pem
	I0906 12:24:47.347796    6239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2672.pem
	I0906 12:24:47.349430    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2672.pem /etc/ssl/certs/51391683.0"
	I0906 12:24:47.352479    6239 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:24:47.353985    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:24:47.356035    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:24:47.357898    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:24:47.359879    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:24:47.361635    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:24:47.363733    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
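
The six `openssl x509 -checkend 86400` probes above ask whether each certificate expires within the next 24 hours. The same check in Go, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
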
	I0906 12:24:47.365576    6239 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:24:47.365650    6239 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:24:47.377455    6239 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:24:47.380696    6239 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:24:47.380702    6239 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:24:47.380727    6239 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:24:47.383687    6239 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:24:47.383967    6239 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-236000" does not appear in /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:24:47.384066    6239 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-2143/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-236000" cluster setting kubeconfig missing "stopped-upgrade-236000" context setting]
	I0906 12:24:47.384280    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.384946    6239 kapi.go:59] client config for stopped-upgrade-236000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10286bf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:24:47.385267    6239 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:24:47.387932    6239 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-236000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0906 12:24:47.387937    6239 kubeadm.go:1160] stopping kube-system containers ...
	I0906 12:24:47.387974    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:24:47.400463    6239 docker.go:483] Stopping containers: [6c0684138801 b31953704fbe d586e13d97c8 c859fcd79335 f1e7479bac8f 281e80785bbc 844d4edf7d83 581e8a4e86d3]
	I0906 12:24:47.400524    6239 ssh_runner.go:195] Run: docker stop 6c0684138801 b31953704fbe d586e13d97c8 c859fcd79335 f1e7479bac8f 281e80785bbc 844d4edf7d83 581e8a4e86d3
	I0906 12:24:47.412103    6239 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 12:24:47.417546    6239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:24:47.420283    6239 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:24:47.420288    6239 kubeadm.go:157] found existing configuration files:
	
	I0906 12:24:47.420317    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf
	I0906 12:24:47.422579    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:24:47.422597    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:24:47.425590    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf
	I0906 12:24:47.428301    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:24:47.428323    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:24:47.430977    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf
	I0906 12:24:47.433887    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:24:47.433916    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:24:47.436766    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf
	I0906 12:24:47.439188    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:24:47.439211    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
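
The grep-then-rm sequence above is a cleanup loop: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A local-file sketch of that loop:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:50331")
    	for _, f := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			// Stale or missing: remove to force regeneration.
    			os.Remove(path)
    			fmt.Println("removed", path)
    		}
    	}
    }
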
	I0906 12:24:47.442256    6239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:24:47.445467    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.469333    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.780018    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.913137    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.946001    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
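
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) so an already-provisioned node can be reconciled in place. A sketch of that loop over a plain `ssh` client (the `minikube-guest` alias is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local"}
    	for _, p := range phases {
    		// $PATH is quoted so it expands on the guest, matching the log.
    		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.24.1:$PATH\" " +
    			"kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
    		out, err := exec.Command("ssh", "minikube-guest", cmd).CombinedOutput()
    		fmt.Printf("phase %q: err=%v\n%s", p, err, out)
    	}
    }
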
	I0906 12:24:47.978973    6239 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:24:47.979068    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:48.481161    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:48.981091    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:48.985146    6239 api_server.go:72] duration metric: took 1.00618275s to wait for apiserver process to appear ...
	I0906 12:24:48.985155    6239 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:24:48.985164    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:48.716851    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:48.716872    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:53.987289    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:53.987333    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:53.718196    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:53.718223    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:58.987644    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:58.987684    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:58.719880    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:58.719898    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:03.988463    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:03.988517    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:03.721988    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:03.722031    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:08.989225    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:08.989266    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:08.724338    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:08.724360    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:13.990041    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:13.990058    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:13.726117    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:13.726301    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:13.744209    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:13.744287    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:13.755067    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:13.755149    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:13.765872    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:13.765943    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:13.776305    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:13.776371    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:13.786246    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:13.786305    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:13.796693    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:13.796768    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:13.807000    6165 logs.go:276] 0 containers: []
	W0906 12:25:13.807012    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:13.807073    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:13.817400    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:13.817422    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:13.817427    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:13.831347    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:13.831356    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:13.844014    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:13.844024    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:13.870725    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:13.870732    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:13.882214    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:13.882225    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:13.949411    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:13.949423    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:13.964130    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:13.964143    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:13.977048    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:13.977062    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:13.988384    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:13.988395    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:14.001154    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:14.001168    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:14.045282    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:14.045295    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:14.058668    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:14.058679    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:14.070973    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:14.070984    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:14.085078    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:14.085089    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:14.100386    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:14.100399    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:14.117677    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:14.117686    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
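	Because healthz never comes up, each expired poll triggers a diagnostic sweep like the one above: docker ps -a with a k8s_<component> name filter enumerates every container for each control-plane component (two IDs where a component has restarted, since exited containers are included), then docker logs --tail 400 is captured per container, alongside journalctl for kubelet and docker/cri-docker, dmesg, kubectl describe nodes, and a container-status listing that prefers crictl and falls back to docker. The per-component collection reduces to roughly this sketch (filters, tail depth, and the crictl fallback are taken verbatim from the log):

	    # Gather recent logs for every container of each control-plane component.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager storage-provisioner; do
	        for id in $(docker ps -a --filter=name=k8s_$name --format='{{.ID}}'); do
	            docker logs --tail 400 "$id"
	        done
	    done
	    # Container status: use crictl if installed, otherwise fall back to docker.
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a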
	I0906 12:25:16.624521    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:18.990991    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:18.991038    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:21.626770    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:21.626923    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:21.638213    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:21.638289    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:21.648880    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:21.648956    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:21.659242    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:21.659306    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:21.670243    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:21.670314    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:21.680910    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:21.680973    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:21.691339    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:21.691401    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:21.701779    6165 logs.go:276] 0 containers: []
	W0906 12:25:21.701788    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:21.701837    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:21.712162    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:21.712186    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:21.712192    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:21.725973    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:21.725983    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:21.739949    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:21.739961    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:21.751286    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:21.751297    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:21.763110    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:21.763121    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:21.780676    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:21.780686    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:21.792118    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:21.792130    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:21.834239    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:21.834246    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:21.838977    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:21.838983    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:21.853317    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:21.853328    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:21.866108    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:21.866123    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:21.877444    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:21.877455    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:21.888943    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:21.888954    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:21.925207    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:21.925222    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:21.937157    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:21.937168    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:21.948659    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:21.948672    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:23.992415    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:23.992474    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:24.476773    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:28.994282    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:28.994342    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:29.479063    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:29.479235    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:29.502953    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:29.503069    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:29.518907    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:29.518989    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:29.531855    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:29.531935    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:29.543117    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:29.543185    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:29.553842    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:29.553906    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:29.564837    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:29.564903    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:29.575110    6165 logs.go:276] 0 containers: []
	W0906 12:25:29.575123    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:29.575176    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:29.586021    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:29.586039    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:29.586045    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:29.597517    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:29.597529    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:29.609216    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:29.609227    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:29.625037    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:29.625049    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:29.635957    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:29.635969    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:29.653720    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:29.653733    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:29.667388    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:29.667402    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:29.681654    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:29.681664    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:29.692795    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:29.692806    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:29.719476    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:29.719484    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:29.731616    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:29.731628    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:29.737228    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:29.737235    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:29.771039    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:29.771051    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:29.786728    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:29.786740    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:29.798526    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:29.798540    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:29.839845    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:29.839854    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:33.996836    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:33.996886    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:32.354416    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:38.997945    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:38.997993    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:37.356645    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:37.356898    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:37.382070    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:37.382160    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:37.399385    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:37.399465    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:37.413613    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:37.413679    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:37.429365    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:37.429437    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:37.440402    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:37.440470    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:37.453783    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:37.453855    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:37.476121    6165 logs.go:276] 0 containers: []
	W0906 12:25:37.476134    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:37.476193    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:37.487332    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:37.487352    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:37.487358    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:37.492489    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:37.492503    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:37.530979    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:37.530992    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:37.545482    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:37.545494    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:37.560286    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:37.560297    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:37.571803    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:37.571815    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:37.584102    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:37.584115    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:37.624609    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:37.624619    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:37.642138    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:37.642149    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:37.666847    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:37.666854    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:37.679430    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:37.679443    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:37.691271    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:37.691283    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:37.703388    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:37.703401    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:37.719244    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:37.719256    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:37.730762    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:37.730775    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:37.748275    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:37.748286    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:40.274717    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:44.000279    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:44.000305    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:45.277206    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:45.277626    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:45.318297    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:45.318439    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:45.338869    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:45.338962    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:45.353945    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:45.354023    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:45.366904    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:45.366980    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:45.378160    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:45.378228    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:45.388194    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:45.388260    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:45.398082    6165 logs.go:276] 0 containers: []
	W0906 12:25:45.398093    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:45.398155    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:45.408869    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:45.408886    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:45.408891    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:45.450228    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:45.450237    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:45.454752    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:45.454761    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:45.469679    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:45.469690    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:45.482356    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:45.482373    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:45.497516    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:45.497527    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:45.508944    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:45.508955    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:45.520706    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:45.520716    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:45.538676    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:45.538689    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:45.563898    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:45.563909    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:45.598443    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:45.598456    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:45.612841    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:45.612853    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:45.624591    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:45.624603    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:45.636812    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:45.636823    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:45.650474    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:45.650485    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:45.662741    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:45.662751    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:49.002517    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:49.002781    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:49.032018    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:25:49.032143    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:49.050150    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:25:49.050243    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:49.064357    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:25:49.064432    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:49.076397    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:25:49.076459    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:49.086631    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:25:49.086697    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:49.097407    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:25:49.097468    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:49.107414    6239 logs.go:276] 0 containers: []
	W0906 12:25:49.107424    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:49.107495    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:49.117689    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:25:49.117707    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:49.117713    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:49.203344    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:25:49.203358    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:25:49.215595    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:25:49.215607    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:25:49.236725    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:49.236739    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:49.262898    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:49.262907    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:49.301311    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:49.301321    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:49.305496    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:25:49.305503    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:25:49.319103    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:25:49.319113    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:25:49.330598    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:25:49.330611    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:25:49.343898    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:25:49.343911    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:49.356441    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:25:49.356455    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:25:49.370961    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:25:49.370971    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:25:49.412628    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:25:49.412639    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:25:49.428101    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:25:49.428112    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:25:49.441469    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:25:49.441481    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:25:49.456856    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:25:49.456867    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:25:49.476848    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:25:49.476859    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:25:48.176505    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:51.989540    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:53.178759    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:53.178939    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:53.192651    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:25:53.192721    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:53.207632    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:25:53.207697    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:53.218769    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:25:53.218828    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:53.229436    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:25:53.229505    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:53.239490    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:25:53.239550    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:53.249723    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:25:53.249791    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:53.259944    6165 logs.go:276] 0 containers: []
	W0906 12:25:53.259957    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:53.260008    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:53.270054    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:25:53.270072    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:53.270082    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:53.305510    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:25:53.305522    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:25:53.317809    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:25:53.317820    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:53.329072    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:25:53.329082    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:25:53.345810    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:53.345822    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:53.386461    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:25:53.386469    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:25:53.403856    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:25:53.403869    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:25:53.415381    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:25:53.415394    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:25:53.426276    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:53.426288    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:53.451042    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:25:53.451051    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:53.462612    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:25:53.462624    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:25:53.473927    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:53.473939    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:53.479266    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:25:53.479273    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:25:53.493591    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:25:53.493601    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:25:53.512656    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:25:53.512671    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:25:53.524616    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:25:53.524630    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:25:56.041381    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:56.990516    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:56.990685    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:57.015806    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:25:57.015924    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:57.032655    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:25:57.032739    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:57.049146    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:25:57.049217    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:57.060505    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:25:57.060581    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:57.071581    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:25:57.071642    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:57.082168    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:25:57.082225    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:57.092767    6239 logs.go:276] 0 containers: []
	W0906 12:25:57.092793    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:57.092856    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:57.103793    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:25:57.103811    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:25:57.103817    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:25:57.115326    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:25:57.115336    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:25:57.130654    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:57.130663    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:57.168894    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:25:57.168904    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:25:57.182443    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:25:57.182454    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:25:57.220527    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:25:57.220553    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:25:57.235066    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:57.235077    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:57.239219    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:25:57.239227    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:25:57.256483    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:25:57.256495    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:25:57.268107    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:25:57.268120    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:57.279784    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:57.279799    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:57.318318    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:25:57.318331    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:25:57.332760    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:25:57.332777    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:25:57.351388    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:57.351398    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:57.375837    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:25:57.375848    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:25:57.388250    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:25:57.388265    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:25:57.400940    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:25:57.400954    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:01.043733    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:01.043930    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:01.060321    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:01.060405    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:01.072642    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:01.072706    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:01.083270    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:01.083335    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:01.100520    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:01.100595    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:01.111198    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:01.111271    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:01.121404    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:01.121470    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:01.131777    6165 logs.go:276] 0 containers: []
	W0906 12:26:01.131790    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:01.131843    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:01.142790    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:01.142808    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:01.142813    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:01.189301    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:01.189312    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:01.201728    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:01.201741    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:01.217436    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:01.217448    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:01.233124    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:01.233138    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:01.251432    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:01.251443    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:01.270848    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:01.270860    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:01.288201    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:01.288211    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:01.303089    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:01.303100    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:01.329332    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:01.329343    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:01.351781    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:01.351792    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:01.363352    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:01.363363    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:01.367891    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:01.367898    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:01.403862    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:01.403874    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:01.417981    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:01.417990    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:01.429324    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:01.429334    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:25:59.913747    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:03.943735    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:04.916293    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:04.916628    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:04.952738    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:04.952878    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:04.973355    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:04.973448    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:04.993204    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:04.993276    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:05.005450    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:05.005525    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:05.016184    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:05.016241    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:05.026701    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:05.026777    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:05.037034    6239 logs.go:276] 0 containers: []
	W0906 12:26:05.037046    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:05.037103    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:05.048342    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:05.048362    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:05.048368    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:05.052784    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:05.052795    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:05.066067    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:05.066081    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:05.084076    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:05.084086    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:05.096268    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:05.096279    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:05.134672    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:05.134683    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:05.148303    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:05.148313    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:05.159730    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:05.159742    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:05.171382    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:05.171393    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:05.208897    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:05.208905    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:05.244523    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:05.244534    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:05.256651    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:05.256668    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:05.268276    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:05.268290    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:05.292037    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:05.292047    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:05.311355    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:05.311367    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:05.325655    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:05.325668    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:05.338412    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:05.338423    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
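The cycle above repeats for the rest of this test: each failed apiserver health probe triggers a fresh enumeration of the control-plane containers followed by a tail of each one's logs. (Two test processes, PIDs 6165 and 6239, write to this log concurrently, which is why their timestamps interleave and occasionally appear out of order.) The same diagnosis can be reproduced by hand inside the VM using only commands that appear verbatim in the log; a minimal sketch, with the container ID left as a placeholder:

	# List containers (running or exited) for one control-plane component;
	# the docker shim names them with a k8s_ prefix.
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}

	# Tail the last 400 log lines of a container ID found above.
	docker logs --tail 400 <container-id>

	# Host-side context: kubelet and container-runtime journals, kernel warnings.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	# Cluster-level view via the kubectl binary bundled in the VM.
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig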
	I0906 12:26:07.855331    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:08.946210    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:08.946560    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:08.977257    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:08.977381    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:08.996406    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:08.996496    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:09.010903    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:09.010982    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:09.023196    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:09.023269    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:09.034112    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:09.034172    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:09.044770    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:09.044837    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:09.057096    6165 logs.go:276] 0 containers: []
	W0906 12:26:09.057107    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:09.057167    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:09.067593    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:09.067611    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:09.067616    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:09.081575    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:09.081586    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:09.100262    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:09.100277    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:09.113928    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:09.113943    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:09.125181    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:09.125197    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:09.150938    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:09.150945    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:09.186431    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:09.186444    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:09.201478    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:09.201491    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:09.213138    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:09.213149    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:09.224906    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:09.224917    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:09.264841    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:09.264850    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:09.278511    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:09.278521    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:09.290765    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:09.290776    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:09.303070    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:09.303081    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:09.307680    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:09.307689    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:09.319918    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:09.319928    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:11.832687    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:12.857715    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:12.857888    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:12.872199    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:12.872277    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:12.884780    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:12.884846    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:12.895336    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:12.895404    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:12.906068    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:12.906141    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:12.916886    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:12.916954    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:12.927108    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:12.927178    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:12.938079    6239 logs.go:276] 0 containers: []
	W0906 12:26:12.938092    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:12.938152    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:12.948653    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:12.948672    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:12.948679    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:12.959787    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:12.959799    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:12.974510    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:12.974523    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:12.993545    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:12.993556    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:13.006076    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:13.006087    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:13.029372    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:13.029380    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:13.043542    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:13.043555    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:13.059168    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:13.059178    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:13.073336    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:13.073346    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:13.094275    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:13.094287    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:13.098643    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:13.098651    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:13.115387    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:13.115400    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:13.153184    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:13.153195    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:13.166675    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:13.166687    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:13.179282    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:13.179295    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:13.190949    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:13.190962    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:13.228223    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:13.228236    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
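Each "Checking apiserver healthz" line is answered roughly five seconds later by a "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" line: the probe's client-side timeout fires before the apiserver sends any response headers, so minikube falls back to another log-gathering pass and then retries. A rough shell equivalent of the probe (an approximation, not minikube's actual Go HTTP check; the five-second budget is inferred from the gap between the paired log lines, and -k is needed because the request targets the raw node IP rather than a name on the apiserver certificate):

	curl -sk --max-time 5 https://10.0.2.15:8443/healthz \
	  || echo "stopped: probe timed out or failed"

A healthy apiserver answers this endpoint with the body "ok"; here every attempt times out, consistent with the apiserver container never becoming ready.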
	I0906 12:26:16.834549    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:16.834738    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:16.857809    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:16.857925    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:16.873373    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:16.873457    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:16.885611    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:16.885679    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:16.896527    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:16.896599    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:16.906575    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:16.906647    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:16.917363    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:16.917428    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:16.927679    6165 logs.go:276] 0 containers: []
	W0906 12:26:16.927693    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:16.927753    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:16.943178    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:16.943195    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:16.943203    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:16.955228    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:16.955239    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:16.959694    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:16.959701    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:16.972031    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:16.972044    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:16.986582    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:16.986593    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:17.012816    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:17.012823    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:17.036243    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:17.036252    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:17.049254    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:17.049266    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:17.064731    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:17.064742    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:17.081713    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:17.081723    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:17.092382    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:17.092397    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:15.764439    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:17.103849    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:17.103860    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:17.120842    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:17.120853    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:17.162191    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:17.162200    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:17.196890    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:17.196902    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:17.211431    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:17.211445    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:19.724881    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:20.766783    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:20.766903    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:20.779359    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:20.779438    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:20.789938    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:20.790001    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:20.800264    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:20.800330    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:20.810603    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:20.810697    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:20.821444    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:20.821507    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:20.832070    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:20.832129    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:20.842237    6239 logs.go:276] 0 containers: []
	W0906 12:26:20.842248    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:20.842296    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:20.852518    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:20.852532    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:20.852537    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:20.869557    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:20.869570    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:20.881084    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:20.881099    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:20.906705    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:20.906713    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:20.945435    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:20.945446    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:20.959458    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:20.959469    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:20.970762    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:20.970776    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:20.984485    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:20.984499    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:20.999271    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:20.999281    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:21.003317    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:21.003326    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:21.016795    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:21.016810    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:21.056281    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:21.056292    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:21.068168    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:21.068178    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:21.080904    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:21.080917    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:21.092104    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:21.092115    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:21.131827    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:21.131838    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:21.146800    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:21.146815    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:23.660665    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:24.727213    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:24.727389    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:24.758183    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:24.758283    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:24.774286    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:24.774358    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:24.792819    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:24.792892    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:24.803565    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:24.803634    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:24.814175    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:24.814242    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:24.826078    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:24.826142    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:24.835789    6165 logs.go:276] 0 containers: []
	W0906 12:26:24.835806    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:24.835863    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:24.850391    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:24.850409    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:24.850415    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:24.893505    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:24.893514    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:24.917951    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:24.917958    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:24.930172    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:24.930183    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:24.951068    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:24.951082    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:24.962971    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:24.962983    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:24.975427    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:24.975438    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:24.986833    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:24.986846    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:24.998180    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:24.998191    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:25.034474    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:25.034485    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:25.048026    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:25.048040    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:25.061133    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:25.061145    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:25.078062    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:25.078073    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:25.083175    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:25.083185    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:25.099990    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:25.100000    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:25.114934    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:25.114944    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:28.663282    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:28.663465    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:28.686026    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:28.686141    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:28.700479    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:28.700556    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:28.711794    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:28.711870    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:28.724007    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:28.724071    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:28.734643    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:28.734706    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:28.745292    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:28.745349    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:28.756876    6239 logs.go:276] 0 containers: []
	W0906 12:26:28.756889    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:28.756944    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:28.768545    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:28.768563    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:28.768569    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:28.793360    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:28.793368    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:28.807698    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:28.807714    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:28.821671    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:28.821682    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:28.833324    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:28.833334    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:28.846537    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:28.846548    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:28.857884    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:28.857896    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:28.870022    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:28.870035    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:28.881341    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:28.881352    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:28.896398    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:28.896411    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:28.908239    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:28.908250    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:28.946572    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:28.946588    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:28.961096    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:28.961107    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:28.974467    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:28.974480    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:28.992265    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:28.992281    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:29.029657    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:29.029667    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:29.034218    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:29.034226    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:27.629136    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:31.579453    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:32.630117    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:32.630333    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:32.650538    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:32.650632    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:32.665250    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:32.665326    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:32.677358    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:32.677428    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:32.688323    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:32.688391    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:32.699401    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:32.699472    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:32.710999    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:32.711068    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:32.721351    6165 logs.go:276] 0 containers: []
	W0906 12:26:32.721362    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:32.721412    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:32.731996    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:32.732014    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:32.732019    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:32.756819    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:32.756832    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:32.761531    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:32.761538    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:32.776118    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:32.776129    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:32.790044    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:32.790054    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:32.804555    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:32.804568    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:32.818986    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:32.818997    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:32.831204    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:32.831218    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:32.865936    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:32.865945    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:32.879252    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:32.879264    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:32.891018    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:32.891029    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:32.911389    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:32.911399    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:32.922984    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:32.922994    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:32.934583    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:32.934595    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:32.977095    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:32.977107    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:32.989824    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:32.989837    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:35.503448    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:36.580498    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:36.580735    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:36.606696    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:36.606826    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:36.624395    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:36.624469    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:36.637737    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:36.637799    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:36.649565    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:36.649639    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:36.660302    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:36.660374    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:36.670685    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:36.670750    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:36.680592    6239 logs.go:276] 0 containers: []
	W0906 12:26:36.680606    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:36.680665    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:36.691053    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:36.691070    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:36.691076    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:36.704974    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:36.704985    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:36.716467    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:36.716476    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:36.740446    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:36.740453    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:36.744429    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:36.744435    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:36.758769    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:36.758781    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:36.777444    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:36.777454    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:36.795314    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:36.795325    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:36.835113    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:36.835128    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:36.847604    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:36.847618    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:36.878214    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:36.878229    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:36.892851    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:36.892860    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:36.930807    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:36.930820    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:36.944410    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:36.944421    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:36.957605    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:36.957618    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:36.976003    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:36.976017    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:36.988170    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:36.988185    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:40.505847    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:40.506041    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:40.524432    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:40.524521    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:40.537791    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:40.537868    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:40.548797    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:40.548865    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:40.559385    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:40.559452    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:40.570376    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:40.570439    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:40.581021    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:40.581091    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:40.591366    6165 logs.go:276] 0 containers: []
	W0906 12:26:40.591378    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:40.591436    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:40.601731    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:40.601749    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:40.601755    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:40.614034    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:40.614048    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:40.625545    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:40.625557    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:40.637411    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:40.637423    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:40.654492    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:40.654504    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:40.669083    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:40.669096    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:40.680843    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:40.680856    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:40.695330    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:40.695341    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:40.735987    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:40.735997    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:40.749952    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:40.749962    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:40.764510    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:40.764521    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:40.788168    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:40.788174    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:40.792229    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:40.792235    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:40.826348    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:40.826361    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:40.840563    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:40.840576    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:40.856149    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:40.856164    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:39.526091    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:43.375753    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:44.528445    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:44.528641    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:44.552742    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:44.552861    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:44.569976    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:44.570056    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:44.582461    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:44.582532    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:44.593630    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:44.593701    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:44.606180    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:44.606246    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:44.617099    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:44.617157    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:44.627765    6239 logs.go:276] 0 containers: []
	W0906 12:26:44.627780    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:44.627839    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:44.638399    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:44.638424    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:44.638429    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:44.652539    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:44.652550    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:44.667473    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:44.667483    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:44.680150    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:44.680162    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:44.720363    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:44.720375    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:44.756039    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:44.756052    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:44.768385    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:44.768396    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:44.781376    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:44.781388    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:44.793426    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:44.793439    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:44.808352    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:44.808362    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:44.826090    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:44.826101    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:44.830636    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:44.830646    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:44.848070    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:44.848082    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:44.886354    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:44.886372    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:44.902444    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:44.902458    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:44.914436    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:44.914448    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:44.937725    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:44.937732    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
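The "container status" one-liner that recurs throughout this log is written so that it works whether or not crictl is installed:

	# If crictl is on PATH, `which crictl` prints its path; otherwise the
	# `|| echo crictl` branch substitutes the literal word "crictl", the
	# sudo invocation then fails to find the binary, and the final
	# `|| sudo docker ps -a` fallback runs instead.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a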
	I0906 12:26:47.451810    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:48.378367    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:48.378546    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:48.397254    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:48.397343    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:48.411237    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:48.411318    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:48.422306    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:48.422371    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:48.432592    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:48.432665    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:48.443118    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:48.443189    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:48.453473    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:48.453543    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:48.463473    6165 logs.go:276] 0 containers: []
	W0906 12:26:48.463484    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:48.463544    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:48.475654    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:48.475672    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:48.475678    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:48.489977    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:48.489987    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:48.501544    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:48.501555    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:48.512492    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:48.512506    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:48.523608    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:48.523619    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:48.565861    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:48.565878    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:48.570948    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:48.570954    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:48.584613    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:48.584624    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:48.598475    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:48.598488    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:48.611329    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:48.611342    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:48.645988    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:48.646002    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:48.658000    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:48.658014    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:48.676154    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:48.676166    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:48.701631    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:48.701641    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:48.715402    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:48.715411    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:48.726839    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:48.726850    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:51.240482    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
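
	Each gathering pass starts by enumerating containers per control-plane component with a name filter, exactly as in the `docker ps` lines above. The same discovery can be replayed inside the guest with a loop over the component names the log checks (the `k8s_` prefix is the one the Docker runtime gives kubeadm pod containers):

	    # Enumerate control-plane containers per component, matching the
	    # filters used in the log (run inside the minikube guest).
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      echo "== ${c} =="
	      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	    done

	An empty result is not necessarily fatal: the kindnet filter legitimately matches nothing on these clusters, hence the repeated "No container was found matching \"kindnet\"" warning.
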
	I0906 12:26:52.454167    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:52.454353    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:52.470935    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:52.471022    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:52.485021    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:52.485093    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:52.496353    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:52.496419    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:52.507019    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:52.507086    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:52.517661    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:52.517728    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:52.531338    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:52.531403    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:52.541585    6239 logs.go:276] 0 containers: []
	W0906 12:26:52.541601    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:52.541657    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:52.552320    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:52.552339    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:52.552345    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:52.586923    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:52.586935    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:52.601197    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:52.601211    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:52.615083    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:52.615095    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:52.626041    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:52.626052    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:52.630429    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:52.630434    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:52.669893    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:52.669906    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:52.684402    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:52.684415    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:52.695570    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:52.695581    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:52.707776    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:52.707786    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:52.723301    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:52.723311    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:52.736847    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:52.736857    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:52.777059    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:52.777071    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:52.791348    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:52.791359    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:52.809800    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:52.809812    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:52.820535    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:52.820546    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:52.843869    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:52.843877    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
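
	The "container status" step is runtime-agnostic: it prefers crictl but falls back to plain `docker ps` when crictl is absent. Broken out, the quoted one-liner works like this:

	    # Prefer crictl if installed, otherwise fall back to docker.
	    # `which crictl || echo crictl` keeps the command string non-empty, so
	    # when crictl is missing the first branch fails cleanly and the
	    # `|| sudo docker ps -a` fallback runs instead.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
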
	I0906 12:26:56.242767    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:56.243100    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:56.273702    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:26:56.273828    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:56.293243    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:26:56.293349    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:56.307883    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:26:56.307952    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:56.320668    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:26:56.320742    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:56.331300    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:26:56.331358    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:56.342223    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:26:56.342289    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:56.352860    6165 logs.go:276] 0 containers: []
	W0906 12:26:56.352871    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:56.352931    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:56.363461    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:26:56.363481    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:26:56.363487    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:26:56.374936    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:26:56.374946    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:26:56.387594    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:26:56.387605    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:26:56.399935    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:26:56.399949    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:26:56.416677    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:26:56.416688    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:26:56.427941    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:56.427957    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:56.452983    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:56.452993    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:56.457177    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:56.457183    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:56.492716    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:26:56.492728    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:26:56.504691    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:26:56.504706    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:56.516341    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:26:56.516351    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:26:56.530359    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:26:56.530371    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:26:56.543208    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:26:56.543221    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:26:56.557452    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:56.557463    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:56.599003    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:26:56.599016    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:26:56.613108    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:26:56.613121    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:26:55.357311    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:59.126502    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:00.359925    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:00.360142    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:00.387324    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:00.387442    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:00.411442    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:00.411508    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:00.424692    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:00.424754    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:00.436024    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:00.436090    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:00.446735    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:00.446799    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:00.460560    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:00.460628    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:00.471235    6239 logs.go:276] 0 containers: []
	W0906 12:27:00.471247    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:00.471308    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:00.481963    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:00.481980    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:00.481986    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:00.493117    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:00.493127    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:00.518444    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:00.518452    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:00.534016    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:00.534030    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:00.572314    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:00.572325    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:00.587054    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:00.587065    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:00.604337    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:00.604348    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:00.615901    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:00.615916    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:00.627826    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:00.627837    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:00.640719    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:00.640730    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:00.645623    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:00.645630    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:00.679924    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:00.679934    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:00.699683    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:00.699694    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:00.717435    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:00.717446    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:00.756876    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:00.756887    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:00.771203    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:00.771213    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:00.783166    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:00.783176    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:03.297723    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:04.127624    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:04.127732    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:04.139462    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:04.139540    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:04.151000    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:04.151075    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:04.161529    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:04.161595    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:04.172094    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:04.172163    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:04.183416    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:04.183485    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:04.194321    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:04.194386    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:04.204785    6165 logs.go:276] 0 containers: []
	W0906 12:27:04.204797    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:04.204851    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:04.214993    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:04.215025    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:04.215032    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:04.228561    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:04.228572    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:04.247709    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:04.247720    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:04.261724    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:04.261734    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:04.302555    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:04.302565    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:04.342542    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:04.342553    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:04.354679    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:04.354690    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:04.366636    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:04.366647    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:04.380829    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:04.380839    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:04.392223    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:04.392234    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:04.417755    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:04.417762    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:04.422562    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:04.422569    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:04.437043    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:04.437054    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:04.448864    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:04.448874    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:04.466457    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:04.466470    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:04.477698    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:04.477712    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:06.992246    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
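
	Every log source in these passes is capped at the last 400 lines, containers and host services alike. Collecting the same bundle by hand inside the guest (the container ID is one example from the lists above; the systemd unit names are the ones the log queries):

	    CID=53eba817c413   # e.g. the newer kube-apiserver container listed above
	    docker logs --tail 400 "$CID"                   # one component container
	    sudo journalctl -u kubelet -n 400               # kubelet service log
	    sudo journalctl -u docker -u cri-docker -n 400  # container runtime services
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
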
	I0906 12:27:08.300124    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:08.300246    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:08.310940    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:08.311019    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:08.321551    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:08.321618    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:08.332618    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:08.332684    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:08.342798    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:08.342861    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:08.353033    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:08.353101    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:08.363183    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:08.363249    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:08.373409    6239 logs.go:276] 0 containers: []
	W0906 12:27:08.373422    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:08.373470    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:08.384200    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:08.384220    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:08.384226    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:08.401476    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:08.401487    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:08.416745    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:08.416758    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:08.428937    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:08.428949    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:08.448258    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:08.448270    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:08.463188    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:08.463201    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:08.474432    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:08.474447    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:08.489364    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:08.489375    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:08.504069    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:08.504079    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:08.515349    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:08.515359    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:08.527266    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:08.527278    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:08.541322    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:08.541332    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:08.575184    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:08.575198    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:08.613396    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:08.613406    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:08.625121    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:08.625131    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:08.648772    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:08.648780    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:08.687544    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:08.687554    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
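
	The "describe nodes" step does not rely on a host-side kubectl; it invokes the kubectl binary minikube stages inside the VM, pinned to the cluster's Kubernetes version (v1.24.1 in this run) and pointed at the in-guest kubeconfig, exactly as quoted in the log:

	    # Run inside the guest; both paths are the ones the log shows.
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
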
	I0906 12:27:11.994283    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:11.994578    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:12.026094    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:12.026225    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:12.046733    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:12.046820    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:12.067075    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:12.067148    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:12.083185    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:12.083260    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:11.193649    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:12.103742    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:12.103815    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:12.126499    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:12.126570    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:12.138062    6165 logs.go:276] 0 containers: []
	W0906 12:27:12.138077    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:12.138137    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:12.148999    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:12.149018    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:12.149024    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:12.184422    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:12.184435    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:12.196927    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:12.196944    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:12.209757    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:12.209768    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:12.214321    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:12.214328    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:12.233254    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:12.233265    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:12.251661    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:12.251671    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:12.275964    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:12.275972    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:12.318487    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:12.318503    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:12.331142    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:12.331155    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:12.345270    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:12.345282    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:12.356158    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:12.356169    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:12.367171    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:12.367185    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:12.380903    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:12.380917    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:12.394896    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:12.394909    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:12.406679    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:12.406693    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:14.921125    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:16.195786    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:16.195880    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:16.207189    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:16.207267    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:16.217332    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:16.217405    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:16.227919    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:16.227976    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:16.238405    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:16.238475    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:16.248996    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:16.249052    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:16.259498    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:16.259568    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:16.271651    6239 logs.go:276] 0 containers: []
	W0906 12:27:16.271663    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:16.271713    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:16.282235    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:16.282253    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:16.282258    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:16.319332    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:16.319340    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:16.353942    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:16.353953    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:16.367992    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:16.368004    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:16.379325    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:16.379338    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:16.416437    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:16.416446    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:16.427833    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:16.427844    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:16.439398    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:16.439410    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:16.451468    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:16.451478    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:16.463984    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:16.463995    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:16.473058    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:16.473066    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:16.486877    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:16.486890    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:16.499673    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:16.499686    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:16.514195    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:16.514206    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:16.536856    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:16.536867    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:16.550326    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:16.550337    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:16.564772    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:16.564783    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:19.091295    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:19.923871    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:19.924225    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:19.959644    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:19.959780    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:19.980436    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:19.980546    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:19.995747    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:19.995817    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:20.008964    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:20.009030    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:20.020355    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:20.020421    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:20.031597    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:20.031659    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:20.042178    6165 logs.go:276] 0 containers: []
	W0906 12:27:20.042189    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:20.042239    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:20.054219    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:20.054236    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:20.054242    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:20.071697    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:20.071707    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:20.086545    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:20.086555    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:20.098331    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:20.098342    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:20.109884    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:20.109898    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:20.121691    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:20.121704    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:20.156322    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:20.156335    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:20.171703    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:20.171714    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:20.196960    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:20.196970    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:20.239155    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:20.239164    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:20.250957    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:20.250969    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:20.268060    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:20.268072    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:20.280018    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:20.280030    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:20.291169    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:20.291184    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:20.302695    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:20.302708    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:20.307603    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:20.307611    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:24.093761    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:24.094080    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:24.127919    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:24.128044    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:24.147039    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:24.147136    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:24.167029    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:24.167111    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:24.179377    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:24.179459    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:24.200873    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:24.200946    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:24.212705    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:24.212776    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:24.223231    6239 logs.go:276] 0 containers: []
	W0906 12:27:24.223243    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:24.223297    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:24.234140    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:24.234157    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:24.234165    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:24.271365    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:24.271377    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:24.285325    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:24.285337    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:24.299858    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:24.299869    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:24.310681    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:24.310693    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:24.347919    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:24.347930    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:24.352345    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:24.352351    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:24.364141    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:24.364150    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:24.378314    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:24.378327    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:24.392157    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:24.392169    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:24.406358    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:24.406372    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:24.417865    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:24.417876    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:24.453598    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:24.453610    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:24.472686    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:24.472697    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:24.490854    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:24.490866    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:24.513427    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:24.513436    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:22.821790    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:24.528130    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:24.528144    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:27.041644    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:27.824155    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:27.824599    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:27.867355    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:27.867487    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:27.888291    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:27.888389    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:27.903091    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:27.903166    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:27.915625    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:27.915702    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:27.925934    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:27.925995    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:27.936689    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:27.936758    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:27.947054    6165 logs.go:276] 0 containers: []
	W0906 12:27:27.947065    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:27.947129    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:27.958767    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:27.958786    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:27.958791    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:28.002251    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:28.002278    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:28.036818    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:28.036834    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:28.048789    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:28.048805    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:28.062238    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:28.062250    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:28.079790    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:28.079802    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:28.084117    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:28.084126    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:28.095574    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:28.095587    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:28.107363    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:28.107373    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:28.121436    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:28.121447    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:28.133400    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:28.133413    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:28.144396    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:28.144410    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:28.168264    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:28.168274    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:28.183462    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:28.183478    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:28.198570    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:28.198586    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:28.213967    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:28.213979    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:30.727876    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:32.044000    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:32.044158    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:32.064398    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:32.064497    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:32.080804    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:32.080882    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:32.094433    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:32.094504    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:32.105408    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:32.105486    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:32.115889    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:32.115962    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:32.126678    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:32.126739    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:32.139741    6239 logs.go:276] 0 containers: []
	W0906 12:27:32.139756    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:32.139806    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:32.150575    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:32.150594    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:32.150601    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:32.173921    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:32.173929    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:32.210085    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:32.210099    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:32.224302    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:32.224313    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:32.241881    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:32.241892    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:32.254678    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:32.254693    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:32.269207    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:32.269217    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:32.287163    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:32.287178    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:32.299792    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:32.299803    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:32.311980    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:32.311993    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:32.316715    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:32.316722    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:32.356044    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:32.356057    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:32.372265    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:32.372276    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:32.393075    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:32.393089    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:32.432715    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:32.432725    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:32.453340    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:32.453352    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:32.467720    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:32.467730    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
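
The "N containers: [...]" lines come from enumerating containers per component with Docker's name filter: kubeadm-managed containers are named k8s_<component>_..., and since ps -a includes stopped containers, two IDs per component typically mean an exited earlier instance plus the current one, while kindnet reports none because no such container exists in this run. A sketch of that enumeration, mirroring the logged docker ps invocation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose names match
    // the kubeadm convention k8s_<component>_..., one ID per output line.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }
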
	I0906 12:27:35.729391    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:35.729591    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:35.751455    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:35.751581    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:35.768170    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:35.768255    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:35.782955    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:35.783050    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:35.797428    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:35.797496    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:35.807866    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:35.807927    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:35.818526    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:35.818586    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:35.828766    6165 logs.go:276] 0 containers: []
	W0906 12:27:35.828779    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:35.828831    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:35.844173    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:35.844198    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:35.844203    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:35.858457    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:35.858468    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:35.875672    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:35.875682    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:35.891221    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:35.891232    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:35.927599    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:35.927609    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:35.938986    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:35.938998    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:35.951346    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:35.951356    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:35.955849    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:35.955856    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:35.972119    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:35.972133    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:35.983894    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:35.983908    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:36.008827    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:36.008842    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:36.022710    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:36.022720    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:36.037289    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:36.037299    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:36.048884    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:36.048898    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:36.061653    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:36.061663    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:36.074464    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:36.074476    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
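
Each "Gathering logs for <component> [<id>] ..." step then tails the last 400 lines of that container's log. A minimal wrapper around the logged command; the hard-coded ID below is one of the coredns containers from this run, used purely for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLog returns the last n lines of a container's log.
    // docker logs writes to both stdout and stderr, so capture both.
    func tailContainerLog(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailContainerLog("77e1071c7520", 400)
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Print(out)
    }
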
	I0906 12:27:34.981130    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:38.619774    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:39.983439    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:39.983760    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:40.020062    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:40.020189    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:40.037528    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:40.037605    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:40.053814    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:40.053893    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:40.065142    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:40.065209    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:40.078270    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:40.078335    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:40.089259    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:40.089324    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:40.099344    6239 logs.go:276] 0 containers: []
	W0906 12:27:40.099357    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:40.099412    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:40.109866    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:40.109885    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:40.109890    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:40.149317    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:40.149328    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:40.183675    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:40.183689    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:40.195709    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:40.195723    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:40.206993    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:40.207005    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:40.219212    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:40.219226    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:40.233121    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:40.233135    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:40.248216    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:40.248227    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:40.261386    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:40.261398    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:40.294957    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:40.294969    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:40.307361    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:40.307376    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:40.319873    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:40.319886    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:40.330764    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:40.330775    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:40.334795    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:40.334803    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:40.348490    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:40.348504    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:40.387401    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:40.387412    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:40.404308    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:40.404318    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
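
Besides per-container logs, every cycle pulls host-level sources: the kubelet journal, the docker and cri-docker journals in one query, and a priority-filtered dmesg (-P disables the pager, -H picks human-readable output, -L=never disables color, --level keeps only warnings and worse). A sketch of that fixed source-to-command table, with the commands copied verbatim from the log; the table layout itself is an assumption, not minikube's structure:

    package main

    import "fmt"

    func main() {
        // Source name -> shell command executed over SSH (ssh_runner.go above).
        sources := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n  /bin/bash -c %q\n", name, cmd)
        }
    }
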
	I0906 12:27:42.929334    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:43.622007    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:43.622225    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:43.660716    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:43.660812    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:43.675872    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:43.675949    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:43.687793    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:43.687856    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:43.698493    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:43.698562    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:43.709065    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:43.709136    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:43.719518    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:43.719587    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:43.732828    6165 logs.go:276] 0 containers: []
	W0906 12:27:43.732839    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:43.732897    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:43.742805    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:43.742828    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:43.742834    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:43.754037    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:43.754049    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:43.766818    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:43.766828    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:43.778544    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:43.778555    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:43.822245    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:43.822253    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:43.826822    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:43.826831    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:43.844833    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:43.844844    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:43.856038    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:43.856052    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:43.879968    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:43.879976    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:43.918429    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:43.918442    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:43.932583    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:43.932594    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:43.944499    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:43.944510    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:43.955986    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:43.955997    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:43.969262    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:43.969275    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:43.981531    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:43.981541    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:44.003296    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:44.003313    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:46.519684    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:47.931675    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:47.932070    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:47.973368    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:47.973496    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:47.999691    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:47.999797    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:48.014248    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:48.014324    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:48.027740    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:48.027804    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:48.038417    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:48.038493    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:48.049548    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:48.049620    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:48.060031    6239 logs.go:276] 0 containers: []
	W0906 12:27:48.060043    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:48.060101    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:48.071922    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:48.071943    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:48.071950    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:48.076669    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:48.076678    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:48.095061    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:48.095072    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:48.108264    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:48.108275    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:48.123175    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:48.123187    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:48.135082    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:48.135093    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:48.146671    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:48.146685    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:48.158104    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:48.158116    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:48.198241    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:48.198254    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:48.217613    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:48.217623    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:48.229138    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:48.229149    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:48.247315    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:48.247327    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:48.260732    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:48.260743    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:48.294654    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:48.294666    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:48.334814    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:48.334835    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:48.349117    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:48.349130    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:48.360825    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:48.360837    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:51.522428    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:51.522617    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:51.541224    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:51.541308    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:51.554851    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:51.554929    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:51.565899    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:51.565970    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:51.577012    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:51.577087    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:51.587312    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:51.587383    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:51.598306    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:51.598371    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:51.608657    6165 logs.go:276] 0 containers: []
	W0906 12:27:51.608667    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:51.608721    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:51.619031    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:51.619049    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:51.619056    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:51.631042    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:51.631054    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:51.643381    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:51.643391    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:51.654838    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:51.654849    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:27:51.668656    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:51.668670    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:51.687352    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:51.687367    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:51.699331    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:51.699342    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:51.735773    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:51.735786    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:51.747580    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:51.747595    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:51.762162    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:51.762174    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:51.776728    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:51.776739    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:51.788055    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:51.788070    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:51.805814    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:51.805830    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:51.817247    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:51.817259    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:51.839948    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:51.839956    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:51.880078    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:51.880089    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
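
The "describe nodes" source never touches a host kubectl: it runs the version-pinned binary that minikube placed in the guest at /var/lib/minikube/binaries/v1.24.1 against the in-VM kubeconfig. A local sketch of the same invocation (the real run executes it over SSH inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }
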
	I0906 12:27:50.886534    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:54.386285    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:55.889154    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:55.889402    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:55.918344    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:55.918465    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:55.936710    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:55.936798    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:55.954733    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:55.954804    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:55.965815    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:55.965885    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:55.976015    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:55.976082    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:55.986478    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:55.986545    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:55.996436    6239 logs.go:276] 0 containers: []
	W0906 12:27:55.996448    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:55.996507    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:56.010652    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:56.010672    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:56.010678    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:56.025238    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:56.025253    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:56.029798    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:56.029805    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:56.043734    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:56.043745    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:56.055253    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:56.055265    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:56.069762    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:56.069772    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:56.082400    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:56.082410    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:56.120351    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:56.120362    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:56.133951    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:56.133963    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:56.145088    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:56.145100    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:56.163228    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:56.163238    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:56.174379    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:56.174391    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:56.197078    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:56.197087    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:56.235052    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:56.235063    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:56.269903    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:56.269915    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:56.282146    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:56.282159    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:56.298849    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:56.298863    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:58.813222    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:59.389079    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:59.389460    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:59.429566    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:27:59.429698    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:59.459058    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:27:59.459159    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:59.475910    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:27:59.475984    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:59.488288    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:27:59.488362    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:59.498758    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:27:59.498820    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:59.509725    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:27:59.509796    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:59.519989    6165 logs.go:276] 0 containers: []
	W0906 12:27:59.520000    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:59.520057    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:59.530533    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:27:59.530553    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:27:59.530559    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:27:59.544470    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:27:59.544487    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:27:59.556824    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:27:59.556835    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:27:59.569151    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:27:59.569165    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:27:59.580836    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:27:59.580847    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:27:59.592363    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:27:59.592373    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:27:59.606201    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:27:59.606215    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:27:59.620728    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:59.620738    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:59.643738    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:27:59.643747    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:59.657173    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:59.657187    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:59.700098    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:59.700110    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:59.704353    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:27:59.704359    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:27:59.720709    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:27:59.720720    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:27:59.732210    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:27:59.732219    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:27:59.751939    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:59.751953    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:59.785985    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:27:59.785996    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:28:03.815519    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:03.815783    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:03.843059    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:03.843180    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:03.860346    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:03.860419    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:03.873481    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:03.873558    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:03.885184    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:03.885249    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:03.896026    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:03.896090    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:03.906338    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:03.906405    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:03.915995    6239 logs.go:276] 0 containers: []
	W0906 12:28:03.916008    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:03.916066    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:03.926345    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:03.926364    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:03.926370    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:03.964903    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:03.964915    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:03.982997    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:03.983010    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:03.994390    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:03.994402    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:04.011038    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:04.011051    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:04.028914    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:04.028926    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:04.046406    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:04.046418    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:04.059121    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:04.059134    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:04.073703    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:04.073713    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:04.085179    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:04.085190    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:04.097673    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:04.097685    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:04.108979    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:04.108991    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:04.133447    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:04.133454    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:04.171682    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:04.171691    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:04.175675    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:04.175683    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:04.214207    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:04.214218    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:04.228227    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:04.228241    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:02.308397    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:06.742195    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:07.311065    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:07.311432    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:07.354117    6165 logs.go:276] 2 containers: [53eba817c413 e069c433a27b]
	I0906 12:28:07.354244    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:07.374981    6165 logs.go:276] 2 containers: [3bba76f3cf6a 48519ad4d4fa]
	I0906 12:28:07.375067    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:07.389205    6165 logs.go:276] 1 containers: [77e1071c7520]
	I0906 12:28:07.389278    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:07.401407    6165 logs.go:276] 2 containers: [5cf18f5eb1c6 b4e8dbebff44]
	I0906 12:28:07.401475    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:07.411857    6165 logs.go:276] 1 containers: [c52314ee17aa]
	I0906 12:28:07.411919    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:07.422054    6165 logs.go:276] 2 containers: [86e56ef26926 faa16963515f]
	I0906 12:28:07.422118    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:07.432260    6165 logs.go:276] 0 containers: []
	W0906 12:28:07.432271    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:07.432322    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:07.443467    6165 logs.go:276] 1 containers: [1d2fb79285ea]
	I0906 12:28:07.443484    6165 logs.go:123] Gathering logs for kube-apiserver [53eba817c413] ...
	I0906 12:28:07.443489    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53eba817c413"
	I0906 12:28:07.457563    6165 logs.go:123] Gathering logs for kube-apiserver [e069c433a27b] ...
	I0906 12:28:07.457574    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e069c433a27b"
	I0906 12:28:07.470041    6165 logs.go:123] Gathering logs for coredns [77e1071c7520] ...
	I0906 12:28:07.470051    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77e1071c7520"
	I0906 12:28:07.481341    6165 logs.go:123] Gathering logs for kube-scheduler [5cf18f5eb1c6] ...
	I0906 12:28:07.481357    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cf18f5eb1c6"
	I0906 12:28:07.492937    6165 logs.go:123] Gathering logs for kube-scheduler [b4e8dbebff44] ...
	I0906 12:28:07.492947    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4e8dbebff44"
	I0906 12:28:07.505790    6165 logs.go:123] Gathering logs for kube-controller-manager [86e56ef26926] ...
	I0906 12:28:07.505803    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86e56ef26926"
	I0906 12:28:07.525985    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:07.525995    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:07.567091    6165 logs.go:123] Gathering logs for etcd [3bba76f3cf6a] ...
	I0906 12:28:07.567102    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bba76f3cf6a"
	I0906 12:28:07.582348    6165 logs.go:123] Gathering logs for etcd [48519ad4d4fa] ...
	I0906 12:28:07.582362    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48519ad4d4fa"
	I0906 12:28:07.597091    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:28:07.597101    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:07.609154    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:07.609165    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:07.613929    6165 logs.go:123] Gathering logs for storage-provisioner [1d2fb79285ea] ...
	I0906 12:28:07.613935    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d2fb79285ea"
	I0906 12:28:07.625264    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:07.625274    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:07.648923    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:07.648931    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:07.683972    6165 logs.go:123] Gathering logs for kube-proxy [c52314ee17aa] ...
	I0906 12:28:07.683986    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52314ee17aa"
	I0906 12:28:07.696470    6165 logs.go:123] Gathering logs for kube-controller-manager [faa16963515f] ...
	I0906 12:28:07.696480    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faa16963515f"
	I0906 12:28:10.210341    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:11.744521    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:11.744671    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:11.755902    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:11.755974    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:11.766444    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:11.766514    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:11.777228    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:11.777286    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:11.787564    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:11.787628    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:11.798015    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:11.798086    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:11.808732    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:11.808793    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:11.824593    6239 logs.go:276] 0 containers: []
	W0906 12:28:11.824606    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:11.824656    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:11.835014    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:11.835034    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:11.835039    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:11.847235    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:11.847249    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:11.870285    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:11.870296    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:11.883692    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:11.883702    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:11.922660    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:11.922671    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:11.934106    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:11.934118    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:11.948803    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:11.948816    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:11.960535    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:11.960546    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:11.995572    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:11.995583    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:12.010071    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:12.010082    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:12.049115    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:12.049127    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:12.062079    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:12.062093    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:12.077159    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:12.077173    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:12.088842    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:12.088853    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:12.107028    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:12.107039    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:12.119874    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:12.119884    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:12.124511    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:12.124520    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
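	[editor's note] The cycle above repeats the same two-step pattern per control-plane component: list matching containers with a docker name filter, then tail each container's logs. A minimal sketch of that pattern in Go (helper name and local `docker` invocation are illustrative assumptions, not minikube's actual logs.go, which runs these commands over the test harness's SSH runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the pattern in the log above: discover
// container IDs for a component via a docker name filter, then tail
// the last 400 lines of each container's logs.
func gatherComponentLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println("gather failed:", err)
		}
	}
}
```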
	I0906 12:28:15.210949    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:15.211083    6165 kubeadm.go:597] duration metric: took 4m3.438961209s to restartPrimaryControlPlane
	W0906 12:28:15.211221    6165 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
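	[editor's note] The `Checking apiserver healthz ... stopped:` pairs throughout this run come from a poll loop with a per-request client timeout; once the overall deadline (about 4m here) passes, minikube gives up and resets the cluster as above. A rough sketch of such a probe (TLS handling, retry cadence, and function name are assumptions for illustration):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz polls the apiserver /healthz endpoint until it answers
// 200 OK or the overall deadline passes, mimicking the retry pattern
// in the log. Skipping TLS verification is for illustration only.
func probeHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, the source of the Client.Timeout errors above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```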
	I0906 12:28:15.211284    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 12:28:16.255514    6165 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.044222125s)
	I0906 12:28:16.255580    6165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:28:16.260545    6165 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:28:16.263289    6165 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:28:16.266999    6165 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:28:16.267007    6165 kubeadm.go:157] found existing configuration files:
	
	I0906 12:28:16.267033    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/admin.conf
	I0906 12:28:16.270164    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:28:16.270190    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:28:16.273181    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/kubelet.conf
	I0906 12:28:16.275746    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:28:16.275767    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:28:16.278541    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/controller-manager.conf
	I0906 12:28:16.281406    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:28:16.281425    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:28:16.283884    6165 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/scheduler.conf
	I0906 12:28:16.286450    6165 kubeadm.go:163] "https://control-plane.minikube.internal:50251" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50251 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:28:16.286471    6165 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
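	[editor's note] The grep/rm sequence above is a stale-kubeconfig sweep: each config file is searched for the expected control-plane endpoint, and any file that does not mention it (or does not exist) is removed with `rm -f` so the following `kubeadm init` regenerates it. A compact sketch of the same logic (paths and endpoint taken from the log; the helper name is made up):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any kubeconfig that does not reference
// the expected control-plane endpoint, so kubeadm init can rewrite it.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			os.Remove(p) // ignore errors, as with `rm -f`
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:50251", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```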
	I0906 12:28:16.289511    6165 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 12:28:16.307848    6165 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0906 12:28:16.308004    6165 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 12:28:16.355359    6165 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 12:28:16.355413    6165 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 12:28:16.355463    6165 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 12:28:16.405421    6165 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:28:16.409999    6165 out.go:235]   - Generating certificates and keys ...
	I0906 12:28:16.410030    6165 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 12:28:16.410055    6165 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 12:28:16.410086    6165 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 12:28:16.410128    6165 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 12:28:16.410162    6165 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 12:28:16.410231    6165 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 12:28:16.410284    6165 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 12:28:16.410320    6165 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 12:28:16.410360    6165 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 12:28:16.410398    6165 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 12:28:16.410418    6165 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 12:28:16.410446    6165 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:28:16.475440    6165 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:28:16.526977    6165 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:28:16.558166    6165 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:28:16.649479    6165 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:28:16.677194    6165 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:28:16.677613    6165 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:28:16.677683    6165 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 12:28:16.759940    6165 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:28:16.764170    6165 out.go:235]   - Booting up control plane ...
	I0906 12:28:16.764226    6165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:28:16.764264    6165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:28:16.764297    6165 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:28:16.767829    6165 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:28:16.768730    6165 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 12:28:14.640599    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:20.770543    6165 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001661 seconds
	I0906 12:28:20.770603    6165 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 12:28:20.773847    6165 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 12:28:21.283733    6165 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 12:28:21.283876    6165 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-549000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 12:28:21.787552    6165 kubeadm.go:310] [bootstrap-token] Using token: utk6ba.0o0w4nted8qb1736
	I0906 12:28:21.792070    6165 out.go:235]   - Configuring RBAC rules ...
	I0906 12:28:21.792133    6165 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 12:28:21.800621    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 12:28:21.802819    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 12:28:21.803805    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 12:28:21.804625    6165 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 12:28:21.805689    6165 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 12:28:21.808863    6165 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 12:28:22.010629    6165 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 12:28:22.202245    6165 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 12:28:22.202720    6165 kubeadm.go:310] 
	I0906 12:28:22.202753    6165 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 12:28:22.202757    6165 kubeadm.go:310] 
	I0906 12:28:22.202794    6165 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 12:28:22.202797    6165 kubeadm.go:310] 
	I0906 12:28:22.202809    6165 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 12:28:22.202837    6165 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 12:28:22.202867    6165 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 12:28:22.202869    6165 kubeadm.go:310] 
	I0906 12:28:22.202896    6165 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 12:28:22.202898    6165 kubeadm.go:310] 
	I0906 12:28:22.202924    6165 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 12:28:22.202927    6165 kubeadm.go:310] 
	I0906 12:28:22.202953    6165 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 12:28:22.202990    6165 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 12:28:22.203040    6165 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 12:28:22.203044    6165 kubeadm.go:310] 
	I0906 12:28:22.203084    6165 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 12:28:22.203122    6165 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 12:28:22.203127    6165 kubeadm.go:310] 
	I0906 12:28:22.203168    6165 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token utk6ba.0o0w4nted8qb1736 \
	I0906 12:28:22.203235    6165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 \
	I0906 12:28:22.203259    6165 kubeadm.go:310] 	--control-plane 
	I0906 12:28:22.203264    6165 kubeadm.go:310] 
	I0906 12:28:22.203307    6165 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 12:28:22.203310    6165 kubeadm.go:310] 
	I0906 12:28:22.203362    6165 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token utk6ba.0o0w4nted8qb1736 \
	I0906 12:28:22.203414    6165 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 
	I0906 12:28:22.203473    6165 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 12:28:22.203534    6165 cni.go:84] Creating CNI manager for ""
	I0906 12:28:22.203543    6165 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:22.207152    6165 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 12:28:22.214399    6165 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 12:28:22.217488    6165 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
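	[editor's note] The 496-byte `1-k8s.conflist` payload itself is not shown in the log. A bridge CNI config of roughly that shape might look like the sketch below; the JSON contents are an assumption about a typical bridge conflist, not the exact bytes minikube writes:

```go
package main

import "os"

// A typical bridge CNI conflist; the actual 1-k8s.conflist contents are
// not in the log, so this payload is illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Equivalent of the log's `mkdir -p /etc/cni/net.d` followed by the scp.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```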
	I0906 12:28:22.222375    6165 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:28:22.222412    6165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:28:22.222456    6165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-549000 minikube.k8s.io/updated_at=2024_09_06T12_28_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=running-upgrade-549000 minikube.k8s.io/primary=true
	I0906 12:28:22.259452    6165 ops.go:34] apiserver oom_adj: -16
	I0906 12:28:22.259581    6165 kubeadm.go:1113] duration metric: took 37.20025ms to wait for elevateKubeSystemPrivileges
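	[editor's note] The `apiserver oom_adj: -16` read above is the result of `cat /proc/$(pgrep kube-apiserver)/oom_adj`: a negative value confirms the kernel is less likely to OOM-kill the apiserver. A tiny sketch of the same check (the pgrep flags are simplified relative to the log's invocation):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reads the kube-apiserver's OOM adjustment, the same
// check as `cat /proc/$(pgrep kube-apiserver)/oom_adj` in the log.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
```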
	I0906 12:28:22.272091    6165 kubeadm.go:394] duration metric: took 4m10.514098792s to StartCluster
	I0906 12:28:22.272109    6165 settings.go:142] acquiring lock: {Name:mk12afd771d0c660db2e89d96a6968c1a28fb2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:22.272202    6165 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:28:22.273544    6165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:22.273793    6165 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:22.273810    6165 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:28:22.273859    6165 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-549000"
	I0906 12:28:22.273873    6165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-549000"
	I0906 12:28:22.273876    6165 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-549000"
	I0906 12:28:22.273889    6165 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-549000"
	W0906 12:28:22.273893    6165 addons.go:243] addon storage-provisioner should already be in state true
	I0906 12:28:22.273902    6165 config.go:182] Loaded profile config "running-upgrade-549000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:28:22.273911    6165 host.go:66] Checking if "running-upgrade-549000" exists ...
	I0906 12:28:22.274725    6165 kapi.go:59] client config for running-upgrade-549000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/running-upgrade-549000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101b27f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0906 12:28:22.274852    6165 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-549000"
	W0906 12:28:22.274856    6165 addons.go:243] addon default-storageclass should already be in state true
	I0906 12:28:22.274865    6165 host.go:66] Checking if "running-upgrade-549000" exists ...
	I0906 12:28:22.278415    6165 out.go:177] * Verifying Kubernetes components...
	I0906 12:28:22.278787    6165 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:22.278792    6165 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 12:28:22.278797    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	I0906 12:28:22.286273    6165 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:28:19.641050    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:19.641151    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:19.652823    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:19.652905    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:19.664808    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:19.664892    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:19.677317    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:19.677389    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:19.689762    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:19.689854    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:19.702147    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:19.702217    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:19.713763    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:19.713848    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:19.726268    6239 logs.go:276] 0 containers: []
	W0906 12:28:19.726281    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:19.726341    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:19.737151    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:19.737169    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:19.737176    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:19.762459    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:19.762470    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:19.799989    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:19.800005    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:19.834562    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:19.834575    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:19.847946    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:19.847957    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:19.867637    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:19.867650    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:19.879113    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:19.879125    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:19.893691    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:19.893703    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:19.909277    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:19.909286    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:19.921100    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:19.921114    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:19.940097    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:19.940108    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:19.944422    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:19.944431    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:19.959200    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:19.959212    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:19.996471    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:19.996485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:20.013598    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:20.013608    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:20.025278    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:20.025289    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:20.036970    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:20.036984    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:22.552781    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:22.290371    6165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:28:22.294383    6165 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:28:22.294390    6165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 12:28:22.294396    6165 sshutil.go:53] new ssh client: &{IP:localhost Port:50219 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/running-upgrade-549000/id_rsa Username:docker}
	I0906 12:28:22.366386    6165 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:28:22.371589    6165 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:28:22.371635    6165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:28:22.376609    6165 api_server.go:72] duration metric: took 102.802334ms to wait for apiserver process to appear ...
	I0906 12:28:22.376616    6165 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:28:22.376623    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:22.382773    6165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:22.405738    6165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:28:22.746441    6165 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:28:22.746455    6165 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:28:27.555060    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:27.555363    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:27.588969    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:27.589087    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:27.607224    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:27.607303    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:27.622646    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:27.622722    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:27.634616    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:27.634691    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:27.646796    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:27.646865    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:27.658953    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:27.659014    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:27.669420    6239 logs.go:276] 0 containers: []
	W0906 12:28:27.669438    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:27.669506    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:27.680235    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:27.680260    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:27.680265    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:27.719240    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:27.719249    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:27.733065    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:27.733077    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:27.771353    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:27.771365    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:27.786001    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:27.786011    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:27.799022    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:27.799033    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:27.834237    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:27.834249    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:27.846984    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:27.846994    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:27.858420    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:27.858432    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:27.869733    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:27.869744    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:27.886824    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:27.886837    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:27.891249    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:27.891257    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:27.902967    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:27.902980    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:27.915512    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:27.915523    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:27.927179    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:27.927193    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:27.941572    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:27.941583    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:27.957511    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:27.957520    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:27.378913    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:27.378995    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:30.481853    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:32.380021    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:32.380061    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:35.484124    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:35.484229    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:35.497336    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:35.497407    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:35.507483    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:35.507555    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:35.517816    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:35.517889    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:35.531340    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:35.531411    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:35.543604    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:35.543679    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:35.554306    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:35.554379    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:35.564469    6239 logs.go:276] 0 containers: []
	W0906 12:28:35.564481    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:35.564538    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:35.575862    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:35.575882    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:35.575888    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:35.593700    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:35.593710    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:35.604979    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:35.604990    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:35.644759    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:35.644782    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:35.656311    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:35.656326    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:35.695302    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:35.695318    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:35.723789    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:35.723812    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:35.748034    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:35.748048    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:35.760279    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:35.760290    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:35.795928    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:35.795944    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:35.811325    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:35.811339    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:35.822909    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:35.822923    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:35.836318    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:35.836333    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:35.853073    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:35.853085    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:35.871228    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:35.871241    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:35.895543    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:35.895561    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:35.900111    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:35.900118    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:38.414388    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:37.380621    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:37.380658    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:43.415048    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:43.415285    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:43.436154    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:43.436255    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:43.451360    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:43.451452    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:43.463733    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:43.463801    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:43.474754    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:43.474817    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:43.485359    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:43.485429    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:43.496549    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:43.496611    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:43.510656    6239 logs.go:276] 0 containers: []
	W0906 12:28:43.510668    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:43.510719    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:43.522188    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:43.522212    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:43.522217    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:43.533938    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:43.533949    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:43.546415    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:43.546426    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:43.558717    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:43.558728    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:43.569663    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:43.569676    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:43.591148    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:43.591158    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:43.605419    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:43.605431    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:43.627626    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:43.627637    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:43.645441    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:43.645452    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:43.658137    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:43.658149    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:43.662161    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:43.662171    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:43.676082    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:43.676095    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:43.714506    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:43.714519    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:43.730940    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:43.730950    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:43.770616    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:43.770626    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:43.805198    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:43.805213    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:43.817107    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:43.817120    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:42.381726    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:42.381767    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:46.329316    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:47.382820    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:47.382883    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:51.331581    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:51.331632    6239 kubeadm.go:597] duration metric: took 4m3.952686542s to restartPrimaryControlPlane
	W0906 12:28:51.331673    6239 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 12:28:51.331689    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 12:28:52.339634    6239 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.007937375s)
	I0906 12:28:52.339697    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:28:52.345165    6239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:28:52.348064    6239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:28:52.350768    6239 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:28:52.350775    6239 kubeadm.go:157] found existing configuration files:
	
	I0906 12:28:52.350798    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf
	I0906 12:28:52.353135    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:28:52.353154    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:28:52.356091    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf
	I0906 12:28:52.359312    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:28:52.359338    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:28:52.362301    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf
	I0906 12:28:52.364811    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:28:52.364831    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:28:52.368065    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf
	I0906 12:28:52.371188    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:28:52.371209    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 12:28:52.373804    6239 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 12:28:52.390936    6239 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0906 12:28:52.391066    6239 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 12:28:52.442154    6239 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 12:28:52.442208    6239 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 12:28:52.442274    6239 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 12:28:52.493012    6239 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:28:52.383539    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:52.383558    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0906 12:28:52.748298    6165 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0906 12:28:52.752353    6165 out.go:177] * Enabled addons: storage-provisioner
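	[editor's note] The `default-storageclass` warning just above failed at the "listing StorageClasses" step because the apiserver at 10.0.2.15:8443 was unreachable. What that addon step amounts to, list StorageClasses and annotate the chosen one as the cluster default, can be sketched with client-go as below; this is an illustrative reconstruction, not minikube's actual addon code path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// markDefaultStorageClass lists StorageClasses and marks `name` as the
// default via the standard annotation. The List call is where the log
// above reports "Error listing StorageClasses: ... i/o timeout".
func markDefaultStorageClass(kubeconfig, name string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("Error listing StorageClasses: %w", err)
	}
	for i := range scs.Items {
		sc := &scs.Items[i]
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = fmt.Sprint(sc.Name == name)
		if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := markDefaultStorageClass("/var/lib/minikube/kubeconfig", "standard"); err != nil {
		fmt.Println(err)
	}
}
```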
	I0906 12:28:52.497246    6239 out.go:235]   - Generating certificates and keys ...
	I0906 12:28:52.497288    6239 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 12:28:52.497321    6239 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 12:28:52.497361    6239 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 12:28:52.497393    6239 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 12:28:52.497434    6239 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 12:28:52.497464    6239 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 12:28:52.497495    6239 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 12:28:52.497529    6239 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 12:28:52.497572    6239 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 12:28:52.497614    6239 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 12:28:52.497633    6239 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 12:28:52.497664    6239 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:28:52.653103    6239 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:28:52.812821    6239 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:28:52.875197    6239 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:28:53.197904    6239 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:28:53.227852    6239 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:28:53.228234    6239 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:28:53.228328    6239 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 12:28:53.313372    6239 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:28:53.317584    6239 out.go:235]   - Booting up control plane ...
	I0906 12:28:53.317632    6239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:28:53.317673    6239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:28:53.317710    6239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:28:53.317762    6239 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:28:53.317868    6239 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 12:28:52.760521    6165 addons.go:510] duration metric: took 30.486935167s for enable addons: enabled=[storage-provisioner]
	I0906 12:28:58.317133    6239 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001704 seconds
	I0906 12:28:58.317201    6239 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 12:28:58.321265    6239 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 12:28:58.834724    6239 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 12:28:58.834841    6239 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-236000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 12:28:59.345158    6239 kubeadm.go:310] [bootstrap-token] Using token: im3wc3.8qcj48hgtkbbi7sm
	I0906 12:28:59.348935    6239 out.go:235]   - Configuring RBAC rules ...
	I0906 12:28:59.348989    6239 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 12:28:59.349032    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 12:28:59.355694    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 12:28:59.356323    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 12:28:59.357269    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 12:28:59.358231    6239 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 12:28:59.361308    6239 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 12:28:59.543231    6239 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 12:28:59.748980    6239 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 12:28:59.749581    6239 kubeadm.go:310] 
	I0906 12:28:59.749612    6239 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 12:28:59.749615    6239 kubeadm.go:310] 
	I0906 12:28:59.749655    6239 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 12:28:59.749658    6239 kubeadm.go:310] 
	I0906 12:28:59.749677    6239 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 12:28:59.749708    6239 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 12:28:59.749731    6239 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 12:28:59.749733    6239 kubeadm.go:310] 
	I0906 12:28:59.749788    6239 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 12:28:59.749793    6239 kubeadm.go:310] 
	I0906 12:28:59.749820    6239 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 12:28:59.749823    6239 kubeadm.go:310] 
	I0906 12:28:59.749846    6239 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 12:28:59.749881    6239 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 12:28:59.749916    6239 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 12:28:59.749919    6239 kubeadm.go:310] 
	I0906 12:28:59.749965    6239 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 12:28:59.750008    6239 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 12:28:59.750012    6239 kubeadm.go:310] 
	I0906 12:28:59.750053    6239 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token im3wc3.8qcj48hgtkbbi7sm \
	I0906 12:28:59.750113    6239 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 \
	I0906 12:28:59.750124    6239 kubeadm.go:310] 	--control-plane 
	I0906 12:28:59.750128    6239 kubeadm.go:310] 
	I0906 12:28:59.750174    6239 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 12:28:59.750179    6239 kubeadm.go:310] 
	I0906 12:28:59.750223    6239 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token im3wc3.8qcj48hgtkbbi7sm \
	I0906 12:28:59.750273    6239 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 
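The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A sketch of recomputing it from the ca.crt kubeadm wrote (the path is assumed from the standard layout):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Read the cluster CA certificate that kubeadm wrote to disk.
        data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }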
	I0906 12:28:59.750356    6239 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 12:28:59.750364    6239 cni.go:84] Creating CNI manager for ""
	I0906 12:28:59.750373    6239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:59.754790    6239 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 12:28:59.761781    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 12:28:59.764678    6239 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
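The scp line above pushes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents, so the JSON below is only a plausible minimal bridge conflist of that kind; every field in it is an assumption, and only the destination path comes from the log:

    package main

    import "os"

    // A plausible minimal bridge CNI config of the kind minikube pushes to
    // /etc/cni/net.d/1-k8s.conflist; the fields and subnet here are guesses.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
        // Needs root: kubelet's CNI plugins read this directory.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }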
	I0906 12:28:59.772060    6239 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:28:59.772149    6239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-236000 minikube.k8s.io/updated_at=2024_09_06T12_28_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=stopped-upgrade-236000 minikube.k8s.io/primary=true
	I0906 12:28:59.772173    6239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:28:59.782331    6239 ops.go:34] apiserver oom_adj: -16
	I0906 12:28:59.820613    6239 kubeadm.go:1113] duration metric: took 48.496208ms to wait for elevateKubeSystemPrivileges
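The kubectl create clusterrolebinding run above is what the elevateKubeSystemPrivileges timing refers to: granting cluster-admin to kube-system's default service account so system pods and addons can act on the cluster. A hedged client-go equivalent of that one command (the clientset plumbing is assumed):

    package addons

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // elevateKubeSystem mirrors `kubectl create clusterrolebinding minikube-rbac
    // --clusterrole=cluster-admin --serviceaccount=kube-system:default`.
    func elevateKubeSystem(ctx context.Context, cs kubernetes.Interface) error {
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "ClusterRole",
                Name:     "cluster-admin",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "default",
                Namespace: "kube-system",
            }},
        }
        _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
        return err
    }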
	I0906 12:28:59.820722    6239 kubeadm.go:394] duration metric: took 4m12.456970333s to StartCluster
	I0906 12:28:59.820734    6239 settings.go:142] acquiring lock: {Name:mk12afd771d0c660db2e89d96a6968c1a28fb2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:59.820813    6239 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:28:59.821252    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:59.821437    6239 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:59.821534    6239 config.go:182] Loaded profile config "stopped-upgrade-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:28:59.821493    6239 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:28:59.821550    6239 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-236000"
	I0906 12:28:59.821565    6239 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-236000"
	W0906 12:28:59.821571    6239 addons.go:243] addon storage-provisioner should already be in state true
	I0906 12:28:59.821583    6239 host.go:66] Checking if "stopped-upgrade-236000" exists ...
	I0906 12:28:59.821569    6239 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-236000"
	I0906 12:28:59.821600    6239 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-236000"
	I0906 12:28:59.822584    6239 kapi.go:59] client config for stopped-upgrade-236000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10286bf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
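The rest.Config dump above boils down to x509 client-certificate auth against https://10.0.2.15:8443. A minimal sketch of building an equivalent clientset with client-go, reusing the cert paths from the dump:

    package kapi

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient builds a clientset equivalent to the config dumped above:
    // client-cert auth, verified against the minikube CA.
    func newClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt",
            },
        }
        return kubernetes.NewForConfig(cfg)
    }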
	I0906 12:28:59.822701    6239 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-236000"
	W0906 12:28:59.822705    6239 addons.go:243] addon default-storageclass should already be in state true
	I0906 12:28:59.822712    6239 host.go:66] Checking if "stopped-upgrade-236000" exists ...
	I0906 12:28:59.825795    6239 out.go:177] * Verifying Kubernetes components...
	I0906 12:28:59.826260    6239 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:59.829858    6239 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 12:28:59.829865    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:28:59.833783    6239 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:28:57.384991    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:57.385043    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:59.837814    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:28:59.841785    6239 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:28:59.841792    6239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 12:28:59.841800    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:28:59.922639    6239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:28:59.927857    6239 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:28:59.927906    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:28:59.932402    6239 api_server.go:72] duration metric: took 110.953ms to wait for apiserver process to appear ...
	I0906 12:28:59.932410    6239 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:28:59.932417    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:59.962406    6239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:59.981356    6239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:29:00.294977    6239 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:29:00.294989    6239 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:29:02.386945    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:02.386969    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:04.934466    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:04.934565    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:07.389117    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:07.389147    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:09.935209    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:09.935239    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:12.391328    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:12.391349    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:14.935651    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:14.935695    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:17.393474    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:17.393499    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:19.936280    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:19.936313    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
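The alternating "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs above are a timed poll: each probe gets a short client timeout, and a timeout is logged and retried rather than treated as fatal. A sketch of that loop, with the per-probe timeout and retry interval as assumptions (the roughly five-second cadence in the timestamps suggests both):

    package health

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz probes the apiserver's /healthz until it answers 200 or the
    // deadline passes, logging each failed attempt like api_server.go does.
    func waitHealthz(url string, deadline time.Duration) error {
        // Self-signed cluster cert; the real code verifies against the CA instead.
        client := &http.Client{
            Timeout:   5 * time.Second, // assumed per-probe timeout
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for start := time.Now(); time.Since(start) < deadline; {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                return nil
            }
            if err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err)
            } else {
                resp.Body.Close()
            }
            time.Sleep(5 * time.Second) // assumed retry interval
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }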
	I0906 12:29:22.395659    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:22.395801    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:22.407474    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:22.407544    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:22.418597    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:22.418669    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:22.429202    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:22.429271    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:22.439236    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:22.439309    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:22.449906    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:22.449978    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:22.461027    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:22.461094    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:22.472625    6165 logs.go:276] 0 containers: []
	W0906 12:29:22.472637    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:22.472695    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:22.482948    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:22.482962    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:22.482967    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:22.497121    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:22.497135    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:22.512327    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:22.512341    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:22.529647    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:22.529663    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:22.542753    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:22.542764    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:22.577081    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:22.577091    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:22.611574    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:22.611586    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:22.623878    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:22.623889    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:22.641020    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:22.641034    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:22.653347    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:22.653362    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:22.671842    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:22.671854    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:22.695144    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:22.695153    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:22.699418    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:22.699428    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
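Each diagnostic cycle above follows the same two-step shape: docker ps -a --filter=name=k8s_<component> to resolve container IDs, then docker logs --tail 400 per hit. A local sketch of that pattern (exec.Command stands in for minikube's ssh_runner, which runs the same commands over SSH inside the VM):

    package logs

    import (
        "os/exec"
        "strings"
    )

    // containersFor returns the IDs of containers whose name carries the
    // kubernetes prefix for a component, mirroring the docker ps runs above.
    func containersFor(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs fetches the last 400 lines for one container, like the
    // `docker logs --tail 400 <id>` runs above.
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).Output()
        return string(out), err
    }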
	I0906 12:29:25.225827    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:24.937019    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:24.937053    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:29.937987    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:29.938014    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0906 12:29:30.297062    6239 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0906 12:29:30.301231    6239 out.go:177] * Enabled addons: storage-provisioner
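The default-storageclass failure above is a single List call against /apis/storage.k8s.io/v1/storageclasses that never connects. With client-go the call looks like the sketch below; the error string in the warning is what such a dial timeout surfaces as:

    package addons

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listStorageClasses is the call that failed in the log: a GET that
    // times out when the apiserver at 10.0.2.15:8443 is unreachable.
    func listStorageClasses(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
        scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err // e.g. "dial tcp 10.0.2.15:8443: i/o timeout"
        }
        names := make([]string, 0, len(scs.Items))
        for _, sc := range scs.Items {
            names = append(names, sc.Name)
        }
        return names, nil
    }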
	I0906 12:29:30.228126    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:30.228269    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:30.244065    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:30.244141    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:30.256957    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:30.257018    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:30.272171    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:30.272247    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:30.282767    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:30.282830    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:30.293230    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:30.293304    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:30.303782    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:30.303846    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:30.314291    6165 logs.go:276] 0 containers: []
	W0906 12:29:30.314354    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:30.314431    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:30.326182    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:30.326195    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:30.326201    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:30.338699    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:30.338711    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:30.363412    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:30.363429    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:30.398287    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:30.398303    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:30.413062    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:30.413072    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:30.427728    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:30.427744    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:30.439936    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:30.439947    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:30.451740    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:30.451750    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:30.468507    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:30.468519    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:30.473283    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:30.473291    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:30.509494    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:30.509507    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:30.521609    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:30.521619    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:30.540534    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:30.540545    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:30.309185    6239 addons.go:510] duration metric: took 30.48794475s for enable addons: enabled=[storage-provisioner]
	I0906 12:29:33.054611    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:34.939207    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:34.939243    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:38.056879    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:38.057065    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:38.075040    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:38.075123    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:38.088041    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:38.088113    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:38.100654    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:38.100721    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:38.111309    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:38.111385    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:38.121929    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:38.121999    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:38.134670    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:38.134735    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:38.145278    6165 logs.go:276] 0 containers: []
	W0906 12:29:38.145290    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:38.145346    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:38.155472    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:38.155486    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:38.155491    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:38.170743    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:38.170757    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:38.186354    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:38.186367    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:38.221713    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:38.221721    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:38.225928    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:38.225936    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:38.262442    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:38.262454    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:38.277238    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:38.277249    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:38.288715    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:38.288728    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:38.302388    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:38.302399    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:38.327567    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:38.327578    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:38.338826    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:38.338840    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:38.355660    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:38.355673    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:38.374963    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:38.374973    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:40.890961    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:39.940722    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:39.940763    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:45.892971    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:45.893176    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:45.918846    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:45.918956    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:45.936790    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:45.936872    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:45.950160    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:45.950231    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:45.962120    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:45.962187    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:45.973121    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:45.973182    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:45.983834    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:45.983888    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:45.993900    6165 logs.go:276] 0 containers: []
	W0906 12:29:45.993913    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:45.993966    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:46.005167    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:46.005183    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:46.005188    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:46.029283    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:46.029293    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:46.064754    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:46.064765    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:46.078633    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:46.078643    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:46.096977    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:46.096990    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:46.110374    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:46.110387    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:46.125557    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:46.125567    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:46.142765    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:46.142777    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:46.154233    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:46.154247    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:46.165483    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:46.165493    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:46.170518    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:46.170526    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:46.204875    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:46.204888    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:46.223429    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:46.223444    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:44.941243    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:44.941283    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:48.737043    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:49.943393    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:49.943445    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:53.739167    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:53.739270    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:53.750373    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:29:53.750444    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:53.760919    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:29:53.760986    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:53.771382    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:29:53.771443    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:53.782377    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:29:53.782448    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:29:53.793465    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:29:53.793537    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:29:53.804931    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:29:53.804991    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:29:53.815400    6165 logs.go:276] 0 containers: []
	W0906 12:29:53.815411    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:29:53.815470    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:29:53.826087    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:29:53.826103    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:29:53.826108    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:53.837725    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:29:53.837736    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:29:53.863258    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:29:53.863267    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:29:53.867464    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:29:53.867472    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:29:53.881613    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:29:53.881626    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:29:53.892892    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:29:53.892905    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:29:53.904575    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:29:53.904588    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:29:53.919314    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:29:53.919326    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:29:53.938609    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:29:53.938619    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:29:53.950090    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:29:53.950104    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:29:53.983646    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:29:53.983655    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:29:54.019402    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:29:54.019412    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:29:54.034299    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:29:54.034311    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:29:56.551827    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:54.944194    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:54.944237    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:01.554029    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:01.554207    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:01.572438    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:01.572525    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:01.585957    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:01.586026    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:01.597677    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:30:01.597747    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:01.608268    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:01.608326    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:01.618846    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:01.618920    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:01.629571    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:01.629628    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:01.639733    6165 logs.go:276] 0 containers: []
	W0906 12:30:01.639744    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:01.639801    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:01.650592    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:01.650608    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:01.650614    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:01.655548    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:01.655555    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:01.690564    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:01.690576    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:01.702822    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:01.702834    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:01.717386    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:01.717398    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:01.729281    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:01.729291    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:01.740793    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:01.740806    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:01.766374    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:01.766390    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:01.800981    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:01.801000    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:01.818778    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:01.818789    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:01.832864    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:01.832874    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:01.847398    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:01.847411    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:01.864695    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:01.864708    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:29:59.946460    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:59.946586    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:59.965682    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:29:59.965761    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:59.977302    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:29:59.977371    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:59.987624    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:29:59.987693    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:59.997921    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:29:59.997990    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:00.008105    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:00.008172    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:00.018409    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:00.018473    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:00.028561    6239 logs.go:276] 0 containers: []
	W0906 12:30:00.028573    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:00.028632    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:00.039661    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:00.039677    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:00.039683    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:00.044573    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:00.044580    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:00.079890    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:00.079901    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:00.094524    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:00.094534    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:00.119355    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:00.119369    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:00.137594    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:00.137609    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:00.149161    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:00.149174    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:00.166800    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:00.166810    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:00.178425    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:00.178435    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:00.211852    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:00.211861    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:00.226113    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:00.226123    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:00.237936    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:00.237952    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:00.249054    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:00.249066    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:02.762473    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:04.380700    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:07.764190    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:07.764358    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:07.778801    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:07.778880    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:07.790143    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:07.790210    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:07.800764    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:07.800836    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:07.810926    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:07.810995    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:07.821363    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:07.821429    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:07.838382    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:07.838453    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:07.848422    6239 logs.go:276] 0 containers: []
	W0906 12:30:07.848433    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:07.848489    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:07.863671    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:07.863685    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:07.863690    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:07.875170    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:07.875183    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:07.886628    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:07.886638    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:07.901637    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:07.901648    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:07.925232    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:07.925242    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:07.959271    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:07.959288    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:07.995603    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:07.995617    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:08.009909    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:08.009921    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:08.021923    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:08.021935    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:08.039983    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:08.039996    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:08.051703    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:08.051714    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:08.063210    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:08.063223    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:08.067475    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:08.067485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:09.381678    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:09.381933    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:09.408416    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:09.408520    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:09.425645    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:09.425729    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:09.439563    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:30:09.439627    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:09.451199    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:09.451267    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:09.462228    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:09.462295    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:09.472553    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:09.472612    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:09.482618    6165 logs.go:276] 0 containers: []
	W0906 12:30:09.482627    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:09.482672    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:09.493045    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:09.493062    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:09.493067    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:09.507137    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:09.507150    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:09.521458    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:09.521469    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:09.533578    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:09.533590    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:09.545881    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:09.545892    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:09.563197    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:09.563211    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:09.575085    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:09.575095    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:09.600191    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:09.600201    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:09.635680    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:09.635694    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:09.647297    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:09.647308    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:09.652057    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:09.652068    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:09.673294    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:09.673305    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:09.685406    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:09.685417    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:10.589311    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:12.221103    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:15.591778    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:15.592149    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:15.628294    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:15.628404    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:15.654314    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:15.654400    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:15.667308    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:15.667377    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:15.679164    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:15.679233    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:15.690040    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:15.690109    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:15.700517    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:15.700577    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:15.710686    6239 logs.go:276] 0 containers: []
	W0906 12:30:15.710696    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:15.710749    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:15.726355    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:15.726370    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:15.726377    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:15.737866    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:15.737882    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:15.749689    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:15.749701    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:15.783815    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:15.783827    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:15.797737    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:15.797748    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:15.811112    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:15.811122    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:15.822807    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:15.822820    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:15.842020    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:15.842030    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:15.868771    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:15.868784    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:15.904087    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:15.904095    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:15.908452    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:15.908460    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:15.926544    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:15.926555    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:15.938468    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:15.938485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
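
The block above is one full iteration of the wait loop this test is stuck in: minikube polls the apiserver's /healthz endpoint, the request times out after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", and the tool falls back to gathering diagnostics before retrying. A minimal Go sketch of that polling step, assuming only the endpoint URL and timeout visible in the log lines (an illustration of the pattern, not minikube's actual implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // the log shows ~5s between "Checking" and "stopped" (assumed value)
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the test cluster presents a self-signed cert, so skip verification
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://10.0.2.15:8443/healthz"
        for i := 0; i < 3; i++ {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            resp, err := client.Get(url)
            if err != nil {
                // a client timeout here produces the same error string as the
                // "stopped: ... context deadline exceeded" lines above
                fmt.Printf("stopped: %s: %v\n", url, err)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver is healthy")
                return
            }
        }
    }
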
	I0906 12:30:18.455621    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:17.223244    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:17.223334    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:17.239259    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:17.239348    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:17.249719    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:17.249783    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:17.260164    6165 logs.go:276] 2 containers: [c714dbf82d9d d71240f41a38]
	I0906 12:30:17.260233    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:17.271440    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:17.271510    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:17.282061    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:17.282131    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:17.292571    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:17.292633    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:17.303433    6165 logs.go:276] 0 containers: []
	W0906 12:30:17.303443    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:17.303501    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:17.314061    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:17.314076    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:17.314082    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:17.318933    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:17.318940    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:17.330885    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:17.330895    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:17.342277    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:17.342292    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:17.361851    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:17.361861    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:17.387257    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:17.387266    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:17.398748    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:17.398759    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:17.410500    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:17.410510    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:17.446401    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:17.446415    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:17.483494    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:17.483506    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:17.497850    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:17.497860    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:17.513124    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:17.513135    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:17.525526    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:17.525536    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:20.042509    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:23.457945    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:23.458139    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:23.476728    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:23.476813    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:23.495446    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:23.495525    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:23.513505    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:23.513560    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:23.524267    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:23.524336    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:23.534946    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:23.535015    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:23.545195    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:23.545258    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:23.555072    6239 logs.go:276] 0 containers: []
	W0906 12:30:23.555084    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:23.555143    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:23.569997    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:23.570012    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:23.570016    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:23.574508    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:23.574516    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:23.594074    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:23.594085    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:23.609395    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:23.609405    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:23.634982    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:23.634995    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:23.648757    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:23.648769    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:23.666083    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:23.666094    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:23.681722    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:23.681732    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:23.716899    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:23.716912    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:23.752364    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:23.752375    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:23.767243    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:23.767257    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:23.779381    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:23.779395    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:23.799739    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:23.799753    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:25.044814    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:25.044981    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:25.060857    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:25.060935    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:25.073822    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:25.073895    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:25.088170    6165 logs.go:276] 3 containers: [c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:25.088243    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:25.099502    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:25.099576    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:25.109928    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:25.109991    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:25.120764    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:25.120830    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:25.131126    6165 logs.go:276] 0 containers: []
	W0906 12:30:25.131140    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:25.131196    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:25.141455    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:25.141472    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:25.141477    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:25.152819    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:25.152833    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:25.167264    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:25.167274    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:25.202724    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:25.202734    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:25.214518    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:25.214531    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:25.240223    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:25.240234    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:25.252816    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:25.252830    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:25.272728    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:25.272740    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:25.285012    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:25.285024    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:25.302612    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:25.302625    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:25.307223    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:25.307228    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:25.321420    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:25.321433    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:25.333455    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:25.333470    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:25.351460    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:25.351473    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:26.313491    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:27.885647    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:31.315761    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:31.316106    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:31.363801    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:31.363899    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:31.377916    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:31.377986    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:31.389803    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:31.389871    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:31.400553    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:31.400619    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:31.412839    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:31.412909    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:31.423436    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:31.423501    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:31.433992    6239 logs.go:276] 0 containers: []
	W0906 12:30:31.434003    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:31.434057    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:31.447413    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:31.447427    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:31.447432    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:31.459021    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:31.459034    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:31.482654    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:31.482665    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:31.494735    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:31.494745    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:31.529089    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:31.529099    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:31.546817    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:31.546830    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:31.565144    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:31.565157    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:31.577110    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:31.577123    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:31.589102    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:31.589116    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:31.606511    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:31.606521    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:31.610856    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:31.610867    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:31.650872    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:31.650884    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:31.665131    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:31.665142    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:34.178781    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:32.887979    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:32.888159    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:32.908833    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:32.908941    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:32.925331    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:32.925409    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:32.938233    6165 logs.go:276] 3 containers: [c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:32.938302    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:32.949513    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:32.949574    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:32.960348    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:32.960412    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:32.970673    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:32.970729    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:32.981243    6165 logs.go:276] 0 containers: []
	W0906 12:30:32.981255    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:32.981300    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:32.991722    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:32.991740    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:32.991745    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:32.996308    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:32.996316    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:33.032000    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:33.032012    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:33.047239    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:33.047249    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:33.072063    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:33.072072    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:33.106733    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:33.106745    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:33.119149    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:33.119159    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:33.130869    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:33.130880    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:33.145822    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:33.145833    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:33.157340    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:33.157355    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:33.171995    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:33.172007    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:33.183801    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:33.183811    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:33.197758    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:33.197766    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:33.215264    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:33.215274    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:35.729430    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:39.181062    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:39.181463    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:39.211720    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:39.211839    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:39.230143    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:39.230241    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:39.243953    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:39.244024    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:39.256361    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:39.256422    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:39.266803    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:39.266875    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:39.277843    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:39.277912    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:39.287828    6239 logs.go:276] 0 containers: []
	W0906 12:30:39.287843    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:39.287900    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:39.298275    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:39.298292    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:39.298296    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:39.331696    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:39.331708    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:39.343617    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:39.343628    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:39.355647    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:39.355658    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:39.368580    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:39.368590    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:39.393729    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:39.393745    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:39.427394    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:39.427403    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:39.431938    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:39.431945    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:39.446398    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:39.446409    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:39.460463    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:39.460474    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:39.475932    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:39.475941    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:39.493782    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:39.493793    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:39.505479    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:39.505493    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
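
Each failed health check is followed by the same diagnostic pass: enumerate the k8s_<component> containers with docker ps name filters, tail the last 400 lines of each container's logs, and pull kubelet/docker output from journalctl. A rough Go sketch of the container-enumeration half, assuming only that the docker CLI is on PATH and that container names carry the k8s_ prefix seen above (a hypothetical helper, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // the k8s_<component> prefix, mirroring:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("listing %s containers: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                // mirrors: /bin/bash -c "docker logs --tail 400 <id>"
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- logs for %s [%s] ---\n%s", c, id, logs)
            }
        }
    }
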
	I0906 12:30:40.731631    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:40.731816    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:40.759069    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:40.759157    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:40.773327    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:40.773402    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:40.785669    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:40.785733    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:40.796578    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:40.796639    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:40.806637    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:40.806693    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:40.817507    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:40.817563    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:40.828014    6165 logs.go:276] 0 containers: []
	W0906 12:30:40.828024    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:40.828072    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:40.837954    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:40.837973    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:40.837979    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:40.854233    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:40.854243    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:40.865660    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:40.865670    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:40.891342    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:40.891349    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:40.925221    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:30:40.925231    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:30:40.937179    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:40.937192    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:40.954803    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:40.954814    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:40.966434    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:40.966444    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:40.977746    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:40.977755    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:40.990228    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:40.990239    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:41.002670    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:41.002679    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:41.017607    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:41.017618    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:41.029427    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:41.029437    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:41.034318    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:41.034325    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:41.071515    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:41.071527    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:42.019064    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:43.587466    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:47.019367    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:47.019640    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:47.044473    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:47.044591    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:47.063122    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:47.063207    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:47.076095    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:47.076169    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:47.087310    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:47.087371    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:47.097767    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:47.097834    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:47.110496    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:47.110556    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:47.120675    6239 logs.go:276] 0 containers: []
	W0906 12:30:47.120688    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:47.120748    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:47.131165    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:47.131183    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:47.131188    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:47.166103    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:47.166117    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:47.179840    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:47.179852    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:47.192589    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:47.192600    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:47.209406    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:47.209418    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:47.220586    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:47.220600    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:47.255383    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:47.255395    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:47.280341    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:47.280351    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:47.292103    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:47.292113    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:47.308085    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:47.308096    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:47.324967    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:47.324977    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:47.348701    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:47.348710    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:47.359839    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:47.359852    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:48.589692    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:48.589834    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:48.605476    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:48.605543    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:48.618650    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:48.618722    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:48.630061    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:48.630123    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:48.644047    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:48.644114    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:48.654970    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:48.655036    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:48.665161    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:48.665218    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:48.675232    6165 logs.go:276] 0 containers: []
	W0906 12:30:48.675243    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:48.675298    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:48.685627    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:48.685644    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:48.685649    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:48.702198    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:48.702208    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:48.713879    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:48.713888    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:48.732735    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:48.732747    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:48.744824    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:48.744835    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:48.757191    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:48.757202    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:48.772257    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:48.772267    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:48.788216    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:48.788227    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:48.811670    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:48.811683    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:48.845734    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:48.845754    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:48.854009    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:48.854021    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:48.870662    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:48.870673    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:48.904368    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:48.904380    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:48.916513    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:30:48.916526    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:30:48.927894    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:48.927904    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:51.442213    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:49.866314    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:56.444497    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:56.444690    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:56.466412    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:30:56.466512    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:56.482325    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:30:56.482408    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:56.494582    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:30:56.494646    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:56.506304    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:30:56.506364    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:56.517129    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:30:56.517207    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:56.531414    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:30:56.531483    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:56.541380    6165 logs.go:276] 0 containers: []
	W0906 12:30:56.541393    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:56.541461    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:56.551927    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:30:56.551946    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:56.551952    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:56.578071    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:30:56.578080    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:56.589799    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:30:56.589809    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:30:56.604225    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:30:56.604238    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:30:56.616179    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:30:56.616190    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:30:56.628430    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:56.628440    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:56.663724    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:30:56.663734    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:30:56.675443    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:30:56.675455    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:30:56.689524    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:30:56.689535    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:30:56.701147    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:30:56.701157    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:30:56.712998    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:30:56.713011    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:30:56.731075    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:30:56.731085    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:30:56.742258    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:56.742271    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:56.746975    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:56.746982    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:56.780780    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:30:56.780793    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:30:54.868530    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:54.868696    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:54.882823    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:54.882929    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:54.894165    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:54.894229    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:54.904812    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:54.904881    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:54.914784    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:54.914850    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:54.929262    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:54.929334    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:54.940272    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:54.940344    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:54.953085    6239 logs.go:276] 0 containers: []
	W0906 12:30:54.953099    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:54.953158    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:54.963179    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:54.963205    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:54.963211    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:54.997630    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:54.997641    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:55.012644    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:55.012657    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:55.023746    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:55.023757    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:55.035299    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:55.035310    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:55.046806    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:55.046817    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:55.059717    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:55.059729    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:55.084799    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:55.084807    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:55.118371    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:55.118382    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:55.122785    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:55.122792    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:55.143687    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:55.143702    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:55.158467    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:55.158490    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:55.170475    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:55.170485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
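	The two minikube processes above (pids 6165 and 6239) are stuck in the same loop: each probe of https://10.0.2.15:8443/healthz times out after about five seconds (the "stopped" lines at api_server.go:269), control falls to the log-gathering pass, and then the next probe starts (api_server.go:253). Below is a minimal Go sketch of that probe loop, inferred from the log alone; the endpoint and the "Client.Timeout exceeded while awaiting headers" error text come from the lines above, while waitForHealthz, the back-off, and the overall deadline are assumptions, not minikube's actual code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz (hypothetical) probes url until it returns 200 OK or the
	// overall deadline passes, mirroring the checked/stopped pairs in the log.
	func waitForHealthz(url string, probeTimeout, overall time.Duration) error {
		client := &http.Client{
			Timeout: probeTimeout, // source of "Client.Timeout exceeded while awaiting headers"
			Transport: &http.Transport{
				// the apiserver inside the guest serves a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver is healthy
				}
			}
			time.Sleep(2 * time.Second) // pause before the next probe (assumed)
		}
		return fmt.Errorf("stopped: %s: apiserver never became healthy", url)
	}

	func main() {
		err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute)
		if err != nil {
			fmt.Println(err)
		}
	}

	In this run the loop never exits early: every probe in the section ends in the same context-deadline error, so the gathering pass repeats with an unchanged set of containers.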
	I0906 12:30:57.689603    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:59.297571    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:02.691888    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:02.692075    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:02.711572    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:02.711664    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:02.725938    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:02.726008    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:02.737759    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:31:02.737831    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:02.748988    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:02.749047    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:02.759957    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:02.760038    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:02.770389    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:02.770450    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:02.781073    6239 logs.go:276] 0 containers: []
	W0906 12:31:02.781085    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:02.781137    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:02.798014    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:02.798028    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:02.798034    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:02.815077    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:02.815088    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:02.826901    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:02.826912    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:02.831160    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:02.831169    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:02.868644    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:02.868657    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:02.880203    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:02.880216    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:02.892225    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:02.892240    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:02.912506    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:02.912517    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:02.924072    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:02.924083    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:02.948346    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:02.948367    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:02.960339    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:02.960349    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:02.993800    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:02.993810    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:03.007813    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:03.007822    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:04.299806    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:04.299929    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:04.312137    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:04.312201    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:04.324376    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:04.324445    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:04.335943    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:04.336036    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:04.347121    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:04.347203    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:04.358964    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:04.359034    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:04.370825    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:04.370892    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:04.382113    6165 logs.go:276] 0 containers: []
	W0906 12:31:04.382125    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:04.382181    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:04.393407    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:04.393446    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:04.393454    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:04.408589    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:04.408604    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:04.421887    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:04.421900    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:04.434755    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:04.434769    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:04.469994    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:04.470006    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:04.482208    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:04.482221    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:04.499922    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:04.499932    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:04.524034    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:04.524044    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:04.528563    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:04.528572    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:04.540596    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:04.540606    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:04.552509    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:04.552520    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:04.586102    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:04.586113    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:04.600290    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:04.600302    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:04.614898    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:04.614927    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:04.629389    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:04.629402    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:05.525442    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:07.143718    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:10.527701    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:10.527866    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:10.540286    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:10.540365    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:10.551357    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:10.551415    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:10.562134    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:31:10.562211    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:10.573079    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:10.573139    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:10.583571    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:10.583642    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:10.593826    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:10.593893    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:10.603822    6239 logs.go:276] 0 containers: []
	W0906 12:31:10.603832    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:10.603887    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:10.616515    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:10.616531    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:10.616535    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:10.652583    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:10.652603    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:10.680468    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:10.680480    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:10.701724    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:10.701738    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:10.726813    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:10.726824    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:10.748255    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:10.748266    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:10.766735    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:10.766749    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:10.778365    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:10.778379    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:10.783155    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:10.783162    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:10.821707    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:10.821720    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:10.836817    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:10.836827    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:10.851168    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:10.851183    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:10.876893    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:10.876915    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
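	Each gathering pass is preceded by the same container inventory: one docker ps -a query per control-plane component, filtered on the k8s_ name prefix, with the resulting IDs reported at logs.go:276. A sketch of that enumeration under stated assumptions: the docker command line and the component names are copied from the log, while containerIDs and the local exec.Command runner are hypothetical stand-ins for minikube's SSH-based runner.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs (hypothetical) returns the short IDs of all containers whose
	// name starts with k8s_<component>, exactly as the ssh_runner lines above do.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // cf. logs.go:276
		}
	}

	The empty result for kindnet is expected here, since this cluster does not run kindnet; that is why every pass logs the No container was found matching "kindnet" warning and then continues.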
	I0906 12:31:13.396473    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:12.145968    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:12.146119    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:12.159045    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:12.159122    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:12.170623    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:12.170698    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:12.181128    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:12.181196    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:12.191893    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:12.191963    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:12.202134    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:12.202207    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:12.213361    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:12.213433    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:12.223972    6165 logs.go:276] 0 containers: []
	W0906 12:31:12.223983    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:12.224043    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:12.234823    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:12.234841    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:12.234846    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:12.246287    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:12.246300    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:12.284714    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:12.284728    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:12.299868    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:12.299882    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:12.311900    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:12.311909    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:12.323675    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:12.323684    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:12.360731    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:12.360747    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:12.375371    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:12.375382    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:12.387443    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:12.387454    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:12.399251    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:12.399265    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:12.410716    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:12.410727    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:12.428252    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:12.428266    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:12.432766    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:12.432775    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:12.446652    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:12.446665    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:12.461175    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:12.461189    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:14.987470    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:18.397596    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:18.397835    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:18.422836    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:18.422951    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:18.438965    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:18.439032    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:18.452291    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:18.452358    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:18.463343    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:18.463410    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:18.473929    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:18.474002    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:18.484771    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:18.484839    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:18.494799    6239 logs.go:276] 0 containers: []
	W0906 12:31:18.494810    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:18.494866    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:18.505472    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:18.505489    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:18.505494    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:18.525221    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:18.525231    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:18.536917    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:18.536931    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:18.548484    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:18.548496    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:18.565732    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:18.565742    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:18.600883    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:18.600891    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:18.617851    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:18.617864    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:18.643777    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:18.643788    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:18.655277    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:18.655290    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:18.669384    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:18.669398    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:18.680910    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:18.680921    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:18.685192    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:18.685199    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:18.699465    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:18.699478    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:18.714406    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:18.714422    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:18.731190    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:18.731201    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:19.988222    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:19.988315    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:19.999912    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:19.999988    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:20.011268    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:20.011337    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:20.021998    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:20.022071    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:20.032425    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:20.032495    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:20.044233    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:20.044299    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:20.054419    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:20.054483    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:20.065061    6165 logs.go:276] 0 containers: []
	W0906 12:31:20.065071    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:20.065121    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:20.075488    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:20.075504    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:20.075509    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:20.108498    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:20.108510    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:20.122902    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:20.122914    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:20.134770    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:20.134781    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:20.146778    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:20.146791    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:20.158704    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:20.158714    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:20.170447    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:20.170457    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:20.207638    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:20.207650    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:20.226419    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:20.226433    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:20.242371    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:20.242384    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:20.254076    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:20.254090    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:20.272672    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:20.272685    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:20.290635    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:20.290647    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:20.305676    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:20.305688    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:20.310562    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:20.310572    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
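	Once the inventory is known, the pass tails each source with a bash one-liner executed on the guest (ssh_runner.go:195). A rough local equivalent, assuming the same commands seen in the log; gather and the plain exec.Command runner are illustrative only, and the container ID shown is just one example from the lines above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather (hypothetical) runs one collection command and prints its output,
	// mirroring each "Gathering logs for X ..." / Run: pair in the log.
	func gather(name, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		gather("kube-apiserver [b169e9cd1ce4]", "docker logs --tail 400 b169e9cd1ce4")
	}

	The sources come back in a different order on each pass in the log, which suggests iteration over an unordered collection of collectors; the set of commands itself is identical every time.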
	I0906 12:31:21.268432    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:22.834591    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:26.269248    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:26.269493    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:26.287078    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:26.287171    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:26.301534    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:26.301603    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:26.312631    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:26.312702    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:26.323280    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:26.323364    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:26.334064    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:26.334131    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:26.348816    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:26.348886    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:26.358772    6239 logs.go:276] 0 containers: []
	W0906 12:31:26.358784    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:26.358836    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:26.369290    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:26.369307    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:26.369314    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:26.384482    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:26.384493    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:26.388714    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:26.388721    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:26.423521    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:26.423536    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:26.435502    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:26.435517    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:26.465616    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:26.465626    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:26.486056    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:26.486064    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:26.522319    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:26.522333    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:26.536160    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:26.536172    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:26.547767    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:26.547780    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:26.565104    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:26.565117    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:26.589965    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:26.589975    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:26.601914    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:26.601927    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:26.618710    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:26.618722    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:26.630151    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:26.630161    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:29.143973    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:27.836941    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:27.837084    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:27.849073    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:27.849153    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:27.859756    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:27.859838    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:27.870157    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:27.870246    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:27.881127    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:27.881195    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:27.891465    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:27.891528    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:27.902403    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:27.902471    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:27.912900    6165 logs.go:276] 0 containers: []
	W0906 12:31:27.912910    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:27.912968    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:27.923549    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:27.923569    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:27.923575    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:27.962977    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:27.962989    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:27.976808    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:27.976818    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:28.000638    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:28.000649    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:28.012281    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:28.012292    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:28.026884    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:28.026895    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:28.038545    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:28.038557    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:28.050746    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:28.050757    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:28.065653    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:28.065666    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:28.077575    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:28.077585    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:28.095269    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:28.095279    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:28.106996    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:28.107010    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:28.118598    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:28.118609    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:28.130285    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:28.130299    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:28.164787    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:28.164798    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:30.671569    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:34.146694    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:34.147105    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:34.176338    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:34.176465    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:34.194342    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:34.194433    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:34.207676    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:34.207753    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:34.219104    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:34.219168    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:34.229799    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:34.229865    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:34.241357    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:34.241427    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:34.252095    6239 logs.go:276] 0 containers: []
	W0906 12:31:34.252107    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:34.252169    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:34.262289    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:34.262307    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:34.262312    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:34.274788    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:34.274803    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:34.298832    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:34.298840    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:34.333811    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:34.333823    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:34.368414    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:34.368425    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:34.380424    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:34.380435    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:34.384565    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:34.384572    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:34.396306    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:34.396316    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:34.410711    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:34.410722    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:34.425868    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:34.425879    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:34.444241    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:34.444252    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:34.455759    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:34.455770    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:34.473146    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:34.473156    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:34.485184    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:34.485195    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:34.500690    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:34.500701    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:35.673803    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:35.673917    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:35.686429    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:35.686503    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:35.698003    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:35.698070    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:35.709038    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:35.709113    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:35.720173    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:35.720244    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:35.730779    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:35.730846    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:35.741453    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:35.741519    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:35.751600    6165 logs.go:276] 0 containers: []
	W0906 12:31:35.751609    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:35.751666    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:35.763871    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:35.763890    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:35.763896    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:35.799719    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:35.799730    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:35.823320    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:35.823330    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:35.835596    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:35.835610    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:35.860602    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:35.860612    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:35.865235    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:35.865243    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:35.884631    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:35.884646    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:35.901563    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:35.901576    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:35.917637    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:35.917649    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:35.932800    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:35.932811    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:35.944518    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:35.944528    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:35.978414    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:35.978427    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:35.995370    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:35.995383    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:36.009398    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:36.009410    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:36.021902    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:36.021912    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:37.014085    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:38.536821    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:42.015317    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:42.015426    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:42.026818    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:42.026895    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:42.048683    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:42.048750    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:42.060020    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:42.060098    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:42.070768    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:42.070834    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:42.081011    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:42.081082    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:42.091089    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:42.091154    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:42.101509    6239 logs.go:276] 0 containers: []
	W0906 12:31:42.101521    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:42.101579    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:42.111593    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:42.111610    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:42.111615    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:42.129091    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:42.129101    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:42.152662    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:42.152669    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:42.188088    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:42.188098    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:42.192267    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:42.192275    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:42.204116    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:42.204127    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:42.225667    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:42.225678    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:42.237824    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:42.237834    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:42.280502    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:42.280518    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:42.295195    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:42.295208    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:42.311241    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:42.311252    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:42.325782    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:42.325793    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:42.337388    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:42.337399    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:42.349969    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:42.349981    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:42.361668    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:42.361679    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:43.539032    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:43.539224    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:43.558109    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:43.558205    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:43.572653    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:43.572730    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:43.585139    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:43.585211    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:43.596961    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:43.597030    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:43.607432    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:43.607502    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:43.618467    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:43.618545    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:43.629181    6165 logs.go:276] 0 containers: []
	W0906 12:31:43.629192    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:43.629251    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:43.639979    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:43.639996    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:43.640001    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:43.655411    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:43.655421    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:43.690119    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:43.690130    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:43.705515    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:43.705525    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:43.716841    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:43.716851    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:43.738248    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:43.738259    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:43.743471    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:43.743479    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:43.758094    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:43.758106    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:43.770249    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:43.770261    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:43.781905    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:43.781919    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:43.805934    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:43.805945    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:43.838928    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:43.838937    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:43.852866    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:43.852880    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:43.864858    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:43.864870    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:43.876465    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:43.876475    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:46.390496    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:44.875723    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:51.392280    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:51.392404    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:51.404879    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:51.404944    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:51.415712    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:51.415782    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:51.426544    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:51.426609    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:51.437288    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:51.437358    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:51.455903    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:51.455975    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:51.467173    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:51.467237    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:51.476966    6165 logs.go:276] 0 containers: []
	W0906 12:31:51.476977    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:51.477034    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:51.487730    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:51.487748    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:51.487754    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:51.492054    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:51.492064    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:51.511250    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:51.511262    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:51.523288    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:51.523301    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:51.534882    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:51.534895    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:51.547405    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:51.547416    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:51.562875    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:51.562884    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:51.574471    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:51.574482    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:51.599465    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:51.599476    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:51.634269    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:51.634282    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:51.670078    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:51.670091    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:31:51.681747    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:51.681761    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:51.694087    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:51.694098    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:51.711465    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:51.711478    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:51.730361    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:51.730371    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:49.877244    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:49.877410    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:49.889284    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:49.889361    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:49.899553    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:49.899621    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:49.910013    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:49.910089    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:49.920259    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:49.920333    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:49.930595    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:49.930664    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:49.944871    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:49.944940    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:49.958932    6239 logs.go:276] 0 containers: []
	W0906 12:31:49.958944    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:49.958996    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:49.969704    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:49.969723    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:49.969728    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:50.004615    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:50.004626    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:50.038398    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:50.038414    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:50.052201    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:50.052211    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:50.065492    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:50.065504    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:50.083511    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:50.083521    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:50.087932    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:50.087944    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:50.105047    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:50.105059    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:50.128897    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:50.128905    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:50.140315    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:50.140328    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:50.155319    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:50.155329    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:50.167716    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:50.167726    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:50.184779    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:50.184789    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:50.196476    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:50.196487    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:50.208427    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:50.208439    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:52.722375    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:54.245534    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:57.724687    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:57.724851    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:57.736597    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:57.736669    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:57.747656    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:57.747722    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:57.758706    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:57.758779    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:57.769080    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:57.769144    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:57.786277    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:57.786352    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:57.796683    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:57.796749    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:57.806581    6239 logs.go:276] 0 containers: []
	W0906 12:31:57.806595    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:57.806650    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:57.817430    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:57.817451    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:57.817456    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:57.832781    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:57.832794    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:57.857535    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:57.857549    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:57.869482    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:57.869492    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:57.880861    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:57.880875    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:57.895168    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:57.895181    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:57.911692    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:57.911703    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:57.926037    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:57.926049    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:57.943797    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:57.943808    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:57.948006    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:57.948013    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:57.964001    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:57.964014    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:57.975773    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:57.975784    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:57.988049    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:57.988060    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:57.999577    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:57.999590    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:58.035374    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:58.035383    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:59.247469    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:59.247629    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:59.258410    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:31:59.258472    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:59.269422    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:31:59.269493    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:59.280817    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:31:59.280885    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:59.295785    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:31:59.295847    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:59.307557    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:31:59.307623    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:59.318286    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:31:59.318355    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:59.329313    6165 logs.go:276] 0 containers: []
	W0906 12:31:59.329323    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:59.329379    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:59.343905    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:31:59.343921    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:31:59.343926    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:31:59.355621    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:31:59.355631    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:59.367377    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:31:59.367388    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:31:59.379377    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:31:59.379387    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:31:59.397347    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:59.397358    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:59.421828    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:31:59.421838    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:31:59.433158    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:59.433168    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:59.468456    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:31:59.468464    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:31:59.482927    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:31:59.482936    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:31:59.495277    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:31:59.495291    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:31:59.510212    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:31:59.510222    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:31:59.522316    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:59.522326    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:59.527058    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:59.527067    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:59.561030    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:31:59.561041    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:31:59.575750    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:31:59.575760    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:32:02.089295    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:00.573941    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:07.091567    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:07.091699    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:05.576212    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:05.576396    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:05.589170    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:05.589250    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:05.599951    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:05.600019    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:05.612477    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:05.612554    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:05.623441    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:05.623513    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:05.634155    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:05.634220    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:05.648400    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:05.648474    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:05.664889    6239 logs.go:276] 0 containers: []
	W0906 12:32:05.664903    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:05.664963    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:05.675877    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:05.675897    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:05.675902    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:05.710782    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:05.710793    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:05.722252    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:05.722265    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:05.737034    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:05.737045    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:05.749334    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:05.749349    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:05.761084    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:05.761095    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:05.778800    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:05.778811    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:05.792880    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:05.792891    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:05.808090    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:05.808102    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:05.825474    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:05.825489    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:05.830240    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:05.830248    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:05.867128    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:05.867141    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:05.879318    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:05.879330    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:05.890790    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:05.890801    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:05.916213    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:05.916227    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:08.430059    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:07.106304    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:32:07.106376    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:07.120025    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:32:07.120091    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:07.131078    6165 logs.go:276] 4 containers: [b8d56638d69b c5f07fc47b7b c714dbf82d9d d71240f41a38]
	I0906 12:32:07.131150    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:07.144419    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:32:07.144478    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:07.155245    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:32:07.155310    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:07.165912    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:32:07.165986    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:07.177815    6165 logs.go:276] 0 containers: []
	W0906 12:32:07.177826    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:07.177891    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:07.189102    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:32:07.189118    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:32:07.189123    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:32:07.200346    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:32:07.200361    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:32:07.215768    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:32:07.215780    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:32:07.231115    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:32:07.231128    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:32:07.248753    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:07.248768    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:07.272765    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:32:07.272776    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:07.284879    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:07.284894    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:07.319930    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:32:07.319941    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:32:07.334560    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:32:07.334574    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:32:07.347070    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:07.347081    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:07.351453    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:32:07.351463    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:32:07.365884    6165 logs.go:123] Gathering logs for coredns [d71240f41a38] ...
	I0906 12:32:07.365894    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71240f41a38"
	I0906 12:32:07.378146    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:32:07.378159    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:32:07.389878    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:07.389893    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:07.423716    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:32:07.423727    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:32:09.937341    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:13.431182    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:13.431321    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:13.442598    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:13.442669    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:13.454579    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:13.454648    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:13.465435    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:13.465507    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:13.476019    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:13.476087    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:13.486633    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:13.486698    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:13.497287    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:13.497353    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:13.507424    6239 logs.go:276] 0 containers: []
	W0906 12:32:13.507436    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:13.507497    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:13.517083    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:13.517099    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:13.517104    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:13.552227    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:13.552238    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:13.570624    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:13.570635    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:13.582507    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:13.582517    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:13.606549    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:13.606561    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:13.623600    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:13.623614    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:13.635122    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:13.635134    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:13.649360    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:13.649370    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:13.661446    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:13.661458    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:13.679235    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:13.679245    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:13.683713    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:13.683719    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:13.708919    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:13.708932    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:13.722124    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:13.722138    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:13.736900    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:13.736912    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:13.778275    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:13.778289    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:14.939591    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:14.939710    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:14.952281    6165 logs.go:276] 1 containers: [10ecc787d12c]
	I0906 12:32:14.952351    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:14.963524    6165 logs.go:276] 1 containers: [6e623b524c4b]
	I0906 12:32:14.963589    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:14.974335    6165 logs.go:276] 4 containers: [ce344e93b0f6 b8d56638d69b c5f07fc47b7b c714dbf82d9d]
	I0906 12:32:14.974402    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:14.985314    6165 logs.go:276] 1 containers: [f50ee82cdb86]
	I0906 12:32:14.985381    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:14.995775    6165 logs.go:276] 1 containers: [b5cdbbdb139d]
	I0906 12:32:14.995840    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:15.005809    6165 logs.go:276] 1 containers: [ea805958957a]
	I0906 12:32:15.005865    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:15.016760    6165 logs.go:276] 0 containers: []
	W0906 12:32:15.016770    6165 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:15.016827    6165 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:15.027234    6165 logs.go:276] 1 containers: [e27a17721598]
	I0906 12:32:15.027252    6165 logs.go:123] Gathering logs for kube-proxy [b5cdbbdb139d] ...
	I0906 12:32:15.027257    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5cdbbdb139d"
	I0906 12:32:15.044251    6165 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:15.044261    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:15.068327    6165 logs.go:123] Gathering logs for container status ...
	I0906 12:32:15.068335    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:15.079557    6165 logs.go:123] Gathering logs for kube-apiserver [10ecc787d12c] ...
	I0906 12:32:15.079567    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10ecc787d12c"
	I0906 12:32:15.093557    6165 logs.go:123] Gathering logs for kube-scheduler [f50ee82cdb86] ...
	I0906 12:32:15.093568    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f50ee82cdb86"
	I0906 12:32:15.108242    6165 logs.go:123] Gathering logs for etcd [6e623b524c4b] ...
	I0906 12:32:15.108253    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e623b524c4b"
	I0906 12:32:15.122408    6165 logs.go:123] Gathering logs for coredns [ce344e93b0f6] ...
	I0906 12:32:15.122421    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce344e93b0f6"
	I0906 12:32:15.136519    6165 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:15.136531    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:15.171182    6165 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:15.171197    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:15.204788    6165 logs.go:123] Gathering logs for storage-provisioner [e27a17721598] ...
	I0906 12:32:15.204804    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e27a17721598"
	I0906 12:32:15.216914    6165 logs.go:123] Gathering logs for coredns [c5f07fc47b7b] ...
	I0906 12:32:15.216929    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5f07fc47b7b"
	I0906 12:32:15.230303    6165 logs.go:123] Gathering logs for coredns [c714dbf82d9d] ...
	I0906 12:32:15.230320    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c714dbf82d9d"
	I0906 12:32:15.253067    6165 logs.go:123] Gathering logs for kube-controller-manager [ea805958957a] ...
	I0906 12:32:15.253081    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea805958957a"
	I0906 12:32:15.273695    6165 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:15.273709    6165 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:15.278870    6165 logs.go:123] Gathering logs for coredns [b8d56638d69b] ...
	I0906 12:32:15.278880    6165 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8d56638d69b"
	I0906 12:32:16.292199    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:17.792280    6165 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:22.794443    6165 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:22.798893    6165 out.go:201] 
	W0906 12:32:22.801795    6165 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0906 12:32:22.801800    6165 out.go:270] * 
	W0906 12:32:22.802196    6165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:32:22.809594    6165 out.go:201] 
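
	The GUEST_START failure above is the end state of the polling loop that dominates this trace: api_server.go probes https://10.0.2.15:8443/healthz with a short per-request timeout until the 6m0s node-wait deadline expires, and every probe here died with "Client.Timeout exceeded". A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual implementation, and the helper name waitForHealthz and the 2-second backoff are assumptions (the URL and the 6m0s/5s values mirror the log).

	// Sketch of a healthz polling loop with per-request and overall deadlines.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, overall, perRequest time.Duration) error {
		client := &http.Client{
			Timeout: perRequest, // a hung apiserver yields "Client.Timeout exceeded"
			Transport: &http.Transport{
				// The test cluster serves a self-signed cert, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Kubernetes /healthz answers 200 with the body "ok" when healthy.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second) // assumed backoff before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second)
		if err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}
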
	I0906 12:32:21.293189    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:21.293408    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:21.310334    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:21.310436    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:21.323586    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:21.323656    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:21.340839    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:21.340908    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:21.351515    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:21.351594    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:21.361701    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:21.361787    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:21.372615    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:21.372701    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:21.383522    6239 logs.go:276] 0 containers: []
	W0906 12:32:21.383536    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:21.383603    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:21.394537    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:21.394556    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:21.394561    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:21.399286    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:21.399294    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:21.433005    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:21.433015    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:21.451792    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:21.451802    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:21.463635    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:21.463646    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:21.475392    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:21.475402    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:21.508637    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:21.508647    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:21.531905    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:21.531917    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:21.546887    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:21.546897    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:21.558067    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:21.558078    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:21.570378    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:21.570390    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:21.584601    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:21.584615    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:21.596296    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:21.596310    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:21.610807    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:21.610820    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:21.628817    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:21.628828    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:24.148956    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:29.150881    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:29.151112    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:29.174212    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:29.174310    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:29.190147    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:29.190221    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:29.202662    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:29.202731    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:29.213228    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:29.213293    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:29.224271    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:29.224340    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:29.235002    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:29.235069    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:29.244801    6239 logs.go:276] 0 containers: []
	W0906 12:32:29.244812    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:29.244863    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:29.255564    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:29.255581    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:29.255586    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:29.267591    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:29.267601    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:29.281696    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:29.281713    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:29.294386    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:29.294397    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:29.307449    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:29.307463    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:29.319203    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:29.319212    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:29.354938    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:29.354948    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:29.390601    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:29.390613    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:29.404963    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:29.404974    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:29.418380    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:29.418391    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:29.429877    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:29.429888    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:29.444250    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:29.444263    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:29.467193    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:29.467199    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:29.471607    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:29.471616    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:29.483541    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:29.483556    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:32.001483    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
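
	Each failed probe in the trace is followed by the same log-gathering pass: list the container ID for each control-plane component with docker ps -a --filter=name=k8s_<name> --format={{.ID}}, then tail each container's logs with docker logs --tail 400 <id>. A minimal sketch of that pass, again an assumption-laden illustration rather than minikube's logs.go (the components slice and the printed format are invented for the example):

	// Sketch of the per-component container discovery and log tailing seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"}
		for _, name := range components {
			// Mirrors the `docker ps -a --filter=name=k8s_... --format={{.ID}}` calls.
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Println("listing", name, "failed:", err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers for %s: %v\n", len(ids), name, ids)
			for _, id := range ids {
				// Mirrors the `docker logs --tail 400 <id>` calls.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
			}
		}
	}
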
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-09-06 19:23:21 UTC, ends at Fri 2024-09-06 19:32:38 UTC. --
	Sep 06 19:32:24 running-upgrade-549000 dockerd[3139]: time="2024-09-06T19:32:24.053232000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:32:24 running-upgrade-549000 dockerd[3139]: time="2024-09-06T19:32:24.053270541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:32:24 running-upgrade-549000 dockerd[3139]: time="2024-09-06T19:32:24.053276416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:32:24 running-upgrade-549000 dockerd[3139]: time="2024-09-06T19:32:24.053440371Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fddda2273837e67155ebabf725fa8aee3506ab9bf74b545de0fa9c4671d207da pid=18604 runtime=io.containerd.runc.v2
	Sep 06 19:32:24 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:24Z" level=error msg="ContainerStats resp: {0x400091a4c0 linux}"
	Sep 06 19:32:25 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:25Z" level=error msg="ContainerStats resp: {0x400091bec0 linux}"
	Sep 06 19:32:25 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:25Z" level=error msg="ContainerStats resp: {0x40000b8540 linux}"
	Sep 06 19:32:25 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:25Z" level=error msg="ContainerStats resp: {0x40000b9240 linux}"
	Sep 06 19:32:25 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:25Z" level=error msg="ContainerStats resp: {0x40000b9500 linux}"
	Sep 06 19:32:25 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:25Z" level=error msg="ContainerStats resp: {0x40004d1c00 linux}"
	Sep 06 19:32:25 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:25Z" level=error msg="ContainerStats resp: {0x40007b64c0 linux}"
	Sep 06 19:32:25 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:25Z" level=error msg="ContainerStats resp: {0x40007b6c00 linux}"
	Sep 06 19:32:26 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:26Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 06 19:32:31 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:31Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 06 19:32:35 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:35Z" level=error msg="ContainerStats resp: {0x40007b7c40 linux}"
	Sep 06 19:32:35 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:35Z" level=error msg="ContainerStats resp: {0x40007b7d80 linux}"
	Sep 06 19:32:36 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:36Z" level=error msg="ContainerStats resp: {0x40004d0a40 linux}"
	Sep 06 19:32:36 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:36Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 06 19:32:37 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:37Z" level=error msg="ContainerStats resp: {0x40004d1f00 linux}"
	Sep 06 19:32:37 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:37Z" level=error msg="ContainerStats resp: {0x400035a8c0 linux}"
	Sep 06 19:32:37 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:37Z" level=error msg="ContainerStats resp: {0x400035ac80 linux}"
	Sep 06 19:32:37 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:37Z" level=error msg="ContainerStats resp: {0x40005bed00 linux}"
	Sep 06 19:32:37 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:37Z" level=error msg="ContainerStats resp: {0x40005bee80 linux}"
	Sep 06 19:32:37 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:37Z" level=error msg="ContainerStats resp: {0x40005bfac0 linux}"
	Sep 06 19:32:37 running-upgrade-549000 cri-dockerd[2980]: time="2024-09-06T19:32:37Z" level=error msg="ContainerStats resp: {0x4000a5eb40 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fddda2273837e       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   2927bbe5f0d44
	ce344e93b0f64       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   972f288eccf16
	b8d56638d69bc       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   2927bbe5f0d44
	c5f07fc47b7b8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   972f288eccf16
	b5cdbbdb139df       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   271a3287a378b
	e27a177215982       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   fc6a8fc62bc09
	f50ee82cdb869       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   95bb6592ef64c
	6e623b524c4bc       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   075d79bbf689f
	ea805958957a7       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   af696c6e767e7
	10ecc787d12cc       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   096e75e9bbed9
	
	
	==> coredns [b8d56638d69b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:39895->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:34339->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:56926->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:33488->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:58103->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:55100->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:57771->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6408798421317284454.2141681611794983903. HINFO: read udp 10.244.0.3:41802->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c5f07fc47b7b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:54644->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:53568->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:34240->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:47358->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:34684->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:39652->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:60559->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:52284->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:57068->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8493820173087958519.4603412873899390725. HINFO: read udp 10.244.0.2:34424->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ce344e93b0f6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4153633755824028483.3551413058141012871. HINFO: read udp 10.244.0.2:59586->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4153633755824028483.3551413058141012871. HINFO: read udp 10.244.0.2:51846->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4153633755824028483.3551413058141012871. HINFO: read udp 10.244.0.2:37832->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4153633755824028483.3551413058141012871. HINFO: read udp 10.244.0.2:53692->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4153633755824028483.3551413058141012871. HINFO: read udp 10.244.0.2:52270->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4153633755824028483.3551413058141012871. HINFO: read udp 10.244.0.2:56762->10.0.2.3:53: i/o timeout
	
	
	==> coredns [fddda2273837] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2288164152425583734.5990906329788100414. HINFO: read udp 10.244.0.3:42729->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2288164152425583734.5990906329788100414. HINFO: read udp 10.244.0.3:42425->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2288164152425583734.5990906329788100414. HINFO: read udp 10.244.0.3:39079->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-549000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-549000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=running-upgrade-549000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T12_28_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:28:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-549000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:32:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:28:22 +0000   Fri, 06 Sep 2024 19:28:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:28:22 +0000   Fri, 06 Sep 2024 19:28:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:28:22 +0000   Fri, 06 Sep 2024 19:28:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:28:22 +0000   Fri, 06 Sep 2024 19:28:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-549000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1090e85d0284802bcb04a606286d973
	  System UUID:                a1090e85d0284802bcb04a606286d973
	  Boot ID:                    6506dd67-d97c-4edd-a820-96ed2a06ffee
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-gd5t2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-xfkh4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-549000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-549000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-549000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-mhmq8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-549000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-549000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-549000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-549000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-549000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-549000 event: Registered Node running-upgrade-549000 in Controller
	
	
	==> dmesg <==
	[  +1.687803] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.059256] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.060446] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.136955] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088976] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.065179] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.871210] systemd-fstab-generator[1283]: Ignoring "noauto" for root device
	[  +7.132750] systemd-fstab-generator[1807]: Ignoring "noauto" for root device
	[  +6.955783] systemd-fstab-generator[2174]: Ignoring "noauto" for root device
	[  +0.166333] systemd-fstab-generator[2209]: Ignoring "noauto" for root device
	[  +0.099546] systemd-fstab-generator[2220]: Ignoring "noauto" for root device
	[  +0.105547] systemd-fstab-generator[2233]: Ignoring "noauto" for root device
	[Sep 6 19:24] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.193747] systemd-fstab-generator[2937]: Ignoring "noauto" for root device
	[  +0.084579] systemd-fstab-generator[2948]: Ignoring "noauto" for root device
	[  +0.070470] systemd-fstab-generator[2959]: Ignoring "noauto" for root device
	[  +0.082546] systemd-fstab-generator[2973]: Ignoring "noauto" for root device
	[  +2.420194] systemd-fstab-generator[3125]: Ignoring "noauto" for root device
	[  +2.804611] systemd-fstab-generator[3514]: Ignoring "noauto" for root device
	[  +1.142261] systemd-fstab-generator[3796]: Ignoring "noauto" for root device
	[ +20.960530] kauditd_printk_skb: 68 callbacks suppressed
	[Sep 6 19:28] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.146795] systemd-fstab-generator[11626]: Ignoring "noauto" for root device
	[  +5.150226] systemd-fstab-generator[12220]: Ignoring "noauto" for root device
	[  +0.448369] systemd-fstab-generator[12352]: Ignoring "noauto" for root device
	
	
	==> etcd [6e623b524c4b] <==
	{"level":"info","ts":"2024-09-06T19:28:17.953Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:28:17.953Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-06T19:28:17.953Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-06T19:28:17.953Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:28:17.953Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-06T19:28:17.953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-06T19:28:17.953Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-06T19:28:18.446Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-549000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:28:18.447Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:28:18.449Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:28:18.449Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:28:18.449Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-06T19:28:18.449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:28:18.449Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:28:18.466Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:28:18.466Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:28:18.467Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:28:18.467Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:32:39 up 9 min,  0 users,  load average: 0.39, 0.22, 0.09
	Linux running-upgrade-549000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [10ecc787d12c] <==
	I0906 19:28:19.741562       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 19:28:19.741747       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:28:19.741899       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0906 19:28:19.741935       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:28:19.741960       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 19:28:19.742662       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0906 19:28:19.773733       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 19:28:20.475529       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 19:28:20.645645       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0906 19:28:20.649293       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 19:28:20.649322       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 19:28:20.784028       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:28:20.793526       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 19:28:20.808989       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0906 19:28:20.811757       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0906 19:28:20.812162       1 controller.go:611] quota admission added evaluator for: endpoints
	I0906 19:28:20.813488       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:28:21.784428       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0906 19:28:22.088751       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0906 19:28:22.092732       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0906 19:28:22.097517       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0906 19:28:22.146770       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:28:35.854999       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0906 19:28:35.954378       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0906 19:28:36.488046       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ea805958957a] <==
	I0906 19:28:35.111683       1 shared_informer.go:262] Caches are synced for taint
	I0906 19:28:35.111754       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0906 19:28:35.112025       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-549000. Assuming now as a timestamp.
	I0906 19:28:35.112069       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0906 19:28:35.111922       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0906 19:28:35.111951       1 event.go:294] "Event occurred" object="running-upgrade-549000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-549000 event: Registered Node running-upgrade-549000 in Controller"
	I0906 19:28:35.113728       1 range_allocator.go:374] Set node running-upgrade-549000 PodCIDR to [10.244.0.0/24]
	I0906 19:28:35.185761       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0906 19:28:35.189994       1 shared_informer.go:262] Caches are synced for daemon sets
	I0906 19:28:35.196204       1 shared_informer.go:262] Caches are synced for TTL
	I0906 19:28:35.202568       1 shared_informer.go:262] Caches are synced for persistent volume
	I0906 19:28:35.202713       1 shared_informer.go:262] Caches are synced for GC
	I0906 19:28:35.216028       1 shared_informer.go:262] Caches are synced for attach detach
	I0906 19:28:35.253384       1 shared_informer.go:262] Caches are synced for disruption
	I0906 19:28:35.253392       1 disruption.go:371] Sending events to api server.
	I0906 19:28:35.254484       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0906 19:28:35.306138       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 19:28:35.309158       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 19:28:35.724200       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 19:28:35.752213       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 19:28:35.752313       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 19:28:35.856456       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0906 19:28:35.957793       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mhmq8"
	I0906 19:28:36.105046       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-gd5t2"
	I0906 19:28:36.115036       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xfkh4"
	
	
	==> kube-proxy [b5cdbbdb139d] <==
	I0906 19:28:36.458378       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0906 19:28:36.458422       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0906 19:28:36.458448       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 19:28:36.486428       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0906 19:28:36.486441       1 server_others.go:206] "Using iptables Proxier"
	I0906 19:28:36.486453       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 19:28:36.486655       1 server.go:661] "Version info" version="v1.24.1"
	I0906 19:28:36.486659       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:28:36.486894       1 config.go:317] "Starting service config controller"
	I0906 19:28:36.486900       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 19:28:36.486910       1 config.go:226] "Starting endpoint slice config controller"
	I0906 19:28:36.486911       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 19:28:36.487126       1 config.go:444] "Starting node config controller"
	I0906 19:28:36.487128       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 19:28:36.587980       1 shared_informer.go:262] Caches are synced for service config
	I0906 19:28:36.587990       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 19:28:36.587983       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f50ee82cdb86] <==
	W0906 19:28:19.698822       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:28:19.699296       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 19:28:19.698839       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:28:19.699331       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0906 19:28:19.698855       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 19:28:19.699375       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 19:28:19.698866       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 19:28:19.699406       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 19:28:19.698876       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:28:19.699449       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 19:28:19.698905       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 19:28:19.699480       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 19:28:19.698916       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 19:28:19.699526       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 19:28:19.698926       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 19:28:19.699563       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 19:28:20.540355       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:28:20.540395       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0906 19:28:20.671780       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 19:28:20.671972       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0906 19:28:20.675682       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:28:20.675770       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 19:28:20.701798       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:28:20.701888       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0906 19:28:20.994398       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-09-06 19:23:21 UTC, ends at Fri 2024-09-06 19:32:39 UTC. --
	Sep 06 19:28:23 running-upgrade-549000 kubelet[12226]: E0906 19:28:23.919056   12226 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-549000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-549000"
	Sep 06 19:28:24 running-upgrade-549000 kubelet[12226]: E0906 19:28:24.124297   12226 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-549000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-549000"
	Sep 06 19:28:24 running-upgrade-549000 kubelet[12226]: I0906 19:28:24.318940   12226 request.go:601] Waited for 1.117554332s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 06 19:28:24 running-upgrade-549000 kubelet[12226]: E0906 19:28:24.323238   12226 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-549000\" already exists" pod="kube-system/etcd-running-upgrade-549000"
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: I0906 19:28:35.117789   12226 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: I0906 19:28:35.202158   12226 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: I0906 19:28:35.202352   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9b505209-33ad-434b-9562-5f2da5c6ac09-tmp\") pod \"storage-provisioner\" (UID: \"9b505209-33ad-434b-9562-5f2da5c6ac09\") " pod="kube-system/storage-provisioner"
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: I0906 19:28:35.202379   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r6pm\" (UniqueName: \"kubernetes.io/projected/9b505209-33ad-434b-9562-5f2da5c6ac09-kube-api-access-7r6pm\") pod \"storage-provisioner\" (UID: \"9b505209-33ad-434b-9562-5f2da5c6ac09\") " pod="kube-system/storage-provisioner"
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: I0906 19:28:35.202490   12226 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: E0906 19:28:35.305078   12226 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: E0906 19:28:35.305095   12226 projected.go:192] Error preparing data for projected volume kube-api-access-7r6pm for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: E0906 19:28:35.305128   12226 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/9b505209-33ad-434b-9562-5f2da5c6ac09-kube-api-access-7r6pm podName:9b505209-33ad-434b-9562-5f2da5c6ac09 nodeName:}" failed. No retries permitted until 2024-09-06 19:28:35.805115426 +0000 UTC m=+13.728339901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7r6pm" (UniqueName: "kubernetes.io/projected/9b505209-33ad-434b-9562-5f2da5c6ac09-kube-api-access-7r6pm") pod "storage-provisioner" (UID: "9b505209-33ad-434b-9562-5f2da5c6ac09") : configmap "kube-root-ca.crt" not found
	Sep 06 19:28:35 running-upgrade-549000 kubelet[12226]: I0906 19:28:35.961128   12226 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.108396   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d54f8cb8-31f6-427c-b954-e15ffb52e953-xtables-lock\") pod \"kube-proxy-mhmq8\" (UID: \"d54f8cb8-31f6-427c-b954-e15ffb52e953\") " pod="kube-system/kube-proxy-mhmq8"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.108434   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d54f8cb8-31f6-427c-b954-e15ffb52e953-kube-proxy\") pod \"kube-proxy-mhmq8\" (UID: \"d54f8cb8-31f6-427c-b954-e15ffb52e953\") " pod="kube-system/kube-proxy-mhmq8"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.108445   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d54f8cb8-31f6-427c-b954-e15ffb52e953-lib-modules\") pod \"kube-proxy-mhmq8\" (UID: \"d54f8cb8-31f6-427c-b954-e15ffb52e953\") " pod="kube-system/kube-proxy-mhmq8"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.108458   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fndfh\" (UniqueName: \"kubernetes.io/projected/d54f8cb8-31f6-427c-b954-e15ffb52e953-kube-api-access-fndfh\") pod \"kube-proxy-mhmq8\" (UID: \"d54f8cb8-31f6-427c-b954-e15ffb52e953\") " pod="kube-system/kube-proxy-mhmq8"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.111560   12226 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.118302   12226 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.209335   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9f52ad7-69d5-4b52-9358-62a945014b98-config-volume\") pod \"coredns-6d4b75cb6d-gd5t2\" (UID: \"b9f52ad7-69d5-4b52-9358-62a945014b98\") " pod="kube-system/coredns-6d4b75cb6d-gd5t2"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.209381   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpgl\" (UniqueName: \"kubernetes.io/projected/b9f52ad7-69d5-4b52-9358-62a945014b98-kube-api-access-6xpgl\") pod \"coredns-6d4b75cb6d-gd5t2\" (UID: \"b9f52ad7-69d5-4b52-9358-62a945014b98\") " pod="kube-system/coredns-6d4b75cb6d-gd5t2"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.209402   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1cfb315-6b4c-4a71-84ce-d9abc57ba10d-config-volume\") pod \"coredns-6d4b75cb6d-xfkh4\" (UID: \"e1cfb315-6b4c-4a71-84ce-d9abc57ba10d\") " pod="kube-system/coredns-6d4b75cb6d-xfkh4"
	Sep 06 19:28:36 running-upgrade-549000 kubelet[12226]: I0906 19:28:36.209413   12226 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzhjd\" (UniqueName: \"kubernetes.io/projected/e1cfb315-6b4c-4a71-84ce-d9abc57ba10d-kube-api-access-mzhjd\") pod \"coredns-6d4b75cb6d-xfkh4\" (UID: \"e1cfb315-6b4c-4a71-84ce-d9abc57ba10d\") " pod="kube-system/coredns-6d4b75cb6d-xfkh4"
	Sep 06 19:32:14 running-upgrade-549000 kubelet[12226]: I0906 19:32:14.300371   12226 scope.go:110] "RemoveContainer" containerID="d71240f41a383ed912877c041587064ebfecfa5a089a79ff16c03e3b4bee31f5"
	Sep 06 19:32:24 running-upgrade-549000 kubelet[12226]: I0906 19:32:24.344123   12226 scope.go:110] "RemoveContainer" containerID="c714dbf82d9d60e5ea6065e4cfddc12cfb9a451288ec1e39d0d5d5e0bbcd4360"
	
	
	==> storage-provisioner [e27a17721598] <==
	I0906 19:28:36.231125       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 19:28:36.235812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 19:28:36.235872       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 19:28:36.238619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 19:28:36.238754       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-549000_7da4e5f3-f7af-4b64-be4d-44f5dab91e7e!
	I0906 19:28:36.238638       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e670e486-7eb7-45f0-a221-a6d68672f498", APIVersion:"v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-549000_7da4e5f3-f7af-4b64-be4d-44f5dab91e7e became leader
	I0906 19:28:36.339728       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-549000_7da4e5f3-f7af-4b64-be4d-44f5dab91e7e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-549000 -n running-upgrade-549000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-549000 -n running-upgrade-549000: exit status 2 (15.764659167s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-549000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-549000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-549000
--- FAIL: TestRunningBinaryUpgrade (601.23s)
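The pattern in the run above: the upgraded cluster's component logs look healthy, yet the harness's final probe, out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-549000, exits 2 and prints "Stopped". A minimal shell sketch for replaying that probe by hand against a live profile (binary path, flag, and profile name are taken verbatim from the log above; it assumes the minikube tree has been built so out/minikube-darwin-arm64 exists):

	# Replay the harness's apiserver status probe (sketch, not harness output).
	out/minikube-darwin-arm64 status --format='{{.APIServer}}' -p running-upgrade-549000
	echo "exit code: $?"    # in the run above: exit 2, output "Stopped"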

TestKubernetesUpgrade (17.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.228294625s)

-- stdout --
	* [kubernetes-upgrade-140000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-140000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:22:36.586475    6027 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:22:36.586723    6027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:36.586729    6027 out.go:358] Setting ErrFile to fd 2...
	I0906 12:22:36.586732    6027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:36.586921    6027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:22:36.588295    6027 out.go:352] Setting JSON to false
	I0906 12:22:36.604840    6027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4926,"bootTime":1725645630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:22:36.604911    6027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:22:36.609176    6027 out.go:177] * [kubernetes-upgrade-140000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:22:36.616106    6027 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:22:36.616146    6027 notify.go:220] Checking for updates...
	I0906 12:22:36.623055    6027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:22:36.626112    6027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:22:36.629184    6027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:22:36.632138    6027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:22:36.635108    6027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:22:36.638484    6027 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:22:36.638552    6027 config.go:182] Loaded profile config "offline-docker-868000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:22:36.638609    6027 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:22:36.643018    6027 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:22:36.650102    6027 start.go:297] selected driver: qemu2
	I0906 12:22:36.650112    6027 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:22:36.650118    6027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:22:36.652316    6027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:22:36.655119    6027 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:22:36.658276    6027 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:22:36.658310    6027 cni.go:84] Creating CNI manager for ""
	I0906 12:22:36.658320    6027 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:22:36.658358    6027 start.go:340] cluster config:
	{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:22:36.662153    6027 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:36.670071    6027 out.go:177] * Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	I0906 12:22:36.674108    6027 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 12:22:36.674126    6027 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 12:22:36.674141    6027 cache.go:56] Caching tarball of preloaded images
	I0906 12:22:36.674214    6027 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:22:36.674221    6027 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0906 12:22:36.674283    6027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/kubernetes-upgrade-140000/config.json ...
	I0906 12:22:36.674300    6027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/kubernetes-upgrade-140000/config.json: {Name:mk318c69584390e261a61bc43eb001341b501e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:22:36.674691    6027 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:36.746192    6027 start.go:364] duration metric: took 71.487458ms to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0906 12:22:36.746237    6027 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:36.746298    6027 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:36.753504    6027 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:22:36.780386    6027 start.go:159] libmachine.API.Create for "kubernetes-upgrade-140000" (driver="qemu2")
	I0906 12:22:36.780421    6027 client.go:168] LocalClient.Create starting
	I0906 12:22:36.780501    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:22:36.780543    6027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:36.780557    6027 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:36.780608    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:22:36.780642    6027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:36.780656    6027 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:36.782849    6027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:22:36.965706    6027 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:37.175079    6027 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:37.175087    6027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:37.175300    6027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:37.185137    6027 main.go:141] libmachine: STDOUT: 
	I0906 12:22:37.185157    6027 main.go:141] libmachine: STDERR: 
	I0906 12:22:37.185210    6027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2 +20000M
	I0906 12:22:37.193244    6027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:37.193264    6027 main.go:141] libmachine: STDERR: 
	I0906 12:22:37.193282    6027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:37.193289    6027 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:37.193303    6027 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:37.193330    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:f2:e5:f4:f9:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:37.194962    6027 main.go:141] libmachine: STDOUT: 
	I0906 12:22:37.194977    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:37.195004    6027 client.go:171] duration metric: took 414.5805ms to LocalClient.Create
	I0906 12:22:39.197197    6027 start.go:128] duration metric: took 2.450892667s to createHost
	I0906 12:22:39.197320    6027 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 2.451126958s
	W0906 12:22:39.197373    6027 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:39.204687    6027 out.go:177] * Deleting "kubernetes-upgrade-140000" in qemu2 ...
	W0906 12:22:39.251702    6027 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:39.251728    6027 start.go:729] Will try again in 5 seconds ...
	I0906 12:22:44.253096    6027 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:44.253161    6027 start.go:364] duration metric: took 50.042µs to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0906 12:22:44.253178    6027 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:22:44.253215    6027 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:22:44.263236    6027 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:22:44.278712    6027 start.go:159] libmachine.API.Create for "kubernetes-upgrade-140000" (driver="qemu2")
	I0906 12:22:44.278735    6027 client.go:168] LocalClient.Create starting
	I0906 12:22:44.278788    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:22:44.278818    6027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:44.278826    6027 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:44.278859    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:22:44.278881    6027 main.go:141] libmachine: Decoding PEM data...
	I0906 12:22:44.278887    6027 main.go:141] libmachine: Parsing certificate...
	I0906 12:22:44.279143    6027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:22:44.438404    6027 main.go:141] libmachine: Creating SSH key...
	I0906 12:22:44.722058    6027 main.go:141] libmachine: Creating Disk image...
	I0906 12:22:44.722066    6027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:22:44.722277    6027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:44.731578    6027 main.go:141] libmachine: STDOUT: 
	I0906 12:22:44.731609    6027 main.go:141] libmachine: STDERR: 
	I0906 12:22:44.731670    6027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2 +20000M
	I0906 12:22:44.739636    6027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:22:44.739650    6027 main.go:141] libmachine: STDERR: 
	I0906 12:22:44.739662    6027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:44.739667    6027 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:22:44.739677    6027 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:44.739710    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:73:dd:8e:7c:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:44.741406    6027 main.go:141] libmachine: STDOUT: 
	I0906 12:22:44.741424    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:44.741440    6027 client.go:171] duration metric: took 462.703375ms to LocalClient.Create
	I0906 12:22:46.743746    6027 start.go:128] duration metric: took 2.490473042s to createHost
	I0906 12:22:46.743861    6027 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 2.490707375s
	W0906 12:22:46.744258    6027 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:46.752692    6027 out.go:201] 
	W0906 12:22:46.760973    6027 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:46.761005    6027 out.go:270] * 
	* 
	W0906 12:22:46.763448    6027 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:22:46.773749    6027 out.go:201] 

** /stderr **
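Every start attempt above fails at the same step: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket, so the guest never boots. A standalone probe of that socket reproduces the symptom; this is a minimal diagnostic sketch (not minikube code), assuming the SocketVMnetPath value shown in the config dump above:

	// socketprobe.go — dial the unix socket that socket_vmnet_client needs.
	// "connection refused" here corresponds to the failure in the log above
	// (daemon not running, or listening on a different path).
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}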
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-140000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-140000: (2.048968792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-140000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-140000 status --format={{.Host}}: exit status 7 (66.679084ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.198214s)

-- stdout --
	* [kubernetes-upgrade-140000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:22:48.937059    6076 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:22:48.937179    6076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:48.937182    6076 out.go:358] Setting ErrFile to fd 2...
	I0906 12:22:48.937184    6076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:22:48.937300    6076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:22:48.938516    6076 out.go:352] Setting JSON to false
	I0906 12:22:48.955808    6076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4938,"bootTime":1725645630,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:22:48.955889    6076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:22:48.961285    6076 out.go:177] * [kubernetes-upgrade-140000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:22:48.969284    6076 notify.go:220] Checking for updates...
	I0906 12:22:48.973239    6076 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:22:48.980264    6076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:22:48.987210    6076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:22:48.995324    6076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:22:48.998305    6076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:22:49.002265    6076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:22:49.006369    6076 config.go:182] Loaded profile config "kubernetes-upgrade-140000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0906 12:22:49.006619    6076 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:22:49.009270    6076 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:22:49.016220    6076 start.go:297] selected driver: qemu2
	I0906 12:22:49.016227    6076 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:22:49.016276    6076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:22:49.018760    6076 cni.go:84] Creating CNI manager for ""
	I0906 12:22:49.018779    6076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:22:49.018803    6076 start.go:340] cluster config:
	{Name:kubernetes-upgrade-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:22:49.022512    6076 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:22:49.030228    6076 out.go:177] * Starting "kubernetes-upgrade-140000" primary control-plane node in "kubernetes-upgrade-140000" cluster
	I0906 12:22:49.034214    6076 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:22:49.034232    6076 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:22:49.034238    6076 cache.go:56] Caching tarball of preloaded images
	I0906 12:22:49.034296    6076 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:22:49.034301    6076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:22:49.034355    6076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/kubernetes-upgrade-140000/config.json ...
	I0906 12:22:49.034690    6076 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:49.034720    6076 start.go:364] duration metric: took 23.625µs to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0906 12:22:49.034730    6076 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:22:49.034736    6076 fix.go:54] fixHost starting: 
	I0906 12:22:49.034863    6076 fix.go:112] recreateIfNeeded on kubernetes-upgrade-140000: state=Stopped err=<nil>
	W0906 12:22:49.034872    6076 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:22:49.042306    6076 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	I0906 12:22:49.046206    6076 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:49.046255    6076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:73:dd:8e:7c:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:49.048444    6076 main.go:141] libmachine: STDOUT: 
	I0906 12:22:49.048544    6076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:49.048576    6076 fix.go:56] duration metric: took 13.84125ms for fixHost
	I0906 12:22:49.048582    6076 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 13.857375ms
	W0906 12:22:49.048591    6076 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:49.048633    6076 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:49.048638    6076 start.go:729] Will try again in 5 seconds ...
	I0906 12:22:54.049085    6076 start.go:360] acquireMachinesLock for kubernetes-upgrade-140000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:22:54.049553    6076 start.go:364] duration metric: took 363.709µs to acquireMachinesLock for "kubernetes-upgrade-140000"
	I0906 12:22:54.049721    6076 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:22:54.049743    6076 fix.go:54] fixHost starting: 
	I0906 12:22:54.050501    6076 fix.go:112] recreateIfNeeded on kubernetes-upgrade-140000: state=Stopped err=<nil>
	W0906 12:22:54.050528    6076 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:22:54.060693    6076 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-140000" ...
	I0906 12:22:54.063715    6076 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:22:54.063972    6076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:73:dd:8e:7c:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubernetes-upgrade-140000/disk.qcow2
	I0906 12:22:54.071095    6076 main.go:141] libmachine: STDOUT: 
	I0906 12:22:54.071155    6076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:22:54.071230    6076 fix.go:56] duration metric: took 21.489459ms for fixHost
	I0906 12:22:54.071249    6076 start.go:83] releasing machines lock for "kubernetes-upgrade-140000", held for 21.668916ms
	W0906 12:22:54.071468    6076 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:22:54.079768    6076 out.go:201] 
	W0906 12:22:54.082833    6076 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:22:54.082849    6076 out.go:270] * 
	* 
	W0906 12:22:54.084161    6076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:22:54.093643    6076 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-140000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-140000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-140000 version --output=json: exit status 1 (41.375584ms)

** stderr ** 
	error: context "kubernetes-upgrade-140000" does not exist

** /stderr **
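The kubectl failure is a knock-on effect rather than a separate bug: neither start attempt ever provisioned the VM, so no "kubernetes-upgrade-140000" context was written to the kubeconfig. A pre-check along the following lines would distinguish "cluster broken" from "context never created"; this is a sketch (the file name is made up; kubectl config get-contexts -o name is standard kubectl):

	// contextcheck.go — list kubeconfig context names and look for the
	// profile under test before running kubectl against it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("could not list contexts:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			if name == "kubernetes-upgrade-140000" { // profile name from the test
				fmt.Println("context exists")
				return
			}
		}
		fmt.Println("context \"kubernetes-upgrade-140000\" does not exist") // matches the error above
	}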
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-09-06 12:22:54.145739 -0700 PDT m=+3258.763859667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-140000 -n kubernetes-upgrade-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-140000 -n kubernetes-upgrade-140000: exit status 7 (34.892792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-140000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-140000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-140000
--- FAIL: TestKubernetesUpgrade (17.70s)
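Note that both the create path (LocalClient.Create) and the restart path (fixHost) above fail before the guest ever boots, so this FAIL looks environmental rather than a regression in the upgrade logic itself. On a Homebrew-managed host the usual remedy, per the minikube QEMU driver documentation, is to (re)start the daemon, e.g. sudo $(which brew) services start socket_vmnet; once the probe sketch above connects, the test should at least get past host creation.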

TestStoppedBinaryUpgrade/Upgrade (611.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3120306353 start -p stopped-upgrade-236000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3120306353 start -p stopped-upgrade-236000 --memory=2200 --vm-driver=qemu2 : (1m18.396150708s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3120306353 -p stopped-upgrade-236000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3120306353 -p stopped-upgrade-236000 stop: (12.1024025s)
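The legacy v1.26.0 binary boots its VM where the kubernetes-upgrade runs above could not because the stopped-upgrade-236000 profile was created without socket_vmnet networking: Network: and both SocketVMnet paths are empty in the config dump below, and the qemu-system-aarch64 invocation further down uses user-mode networking with SSH port forwards (-nic user,model=virtio,hostfwd=tcp::50256-:22,...). Waiting on that forwarded port is essentially what libmachine's "Waiting for VM to start" step below does; a minimal sketch (not minikube code; the port number is taken from the log below):

	// sshwait.go — poll the forwarded guest SSH port until it accepts TCP.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const addr = "127.0.0.1:50256" // hostfwd SSH port from the qemu command line below
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
				conn.Close()
				fmt.Println("guest SSH is reachable")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for", addr)
	}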
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-236000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0906 12:25:12.231936    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:27:09.136613    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:27:22.271132    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:32:09.136403    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:32:22.268837    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-236000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.120408375s)

-- stdout --
	* [stopped-upgrade-236000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-236000" primary control-plane node in "stopped-upgrade-236000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-236000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0906 12:24:19.515683    6239 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:24:19.515801    6239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:24:19.515804    6239 out.go:358] Setting ErrFile to fd 2...
	I0906 12:24:19.515806    6239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:24:19.515948    6239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:24:19.517040    6239 out.go:352] Setting JSON to false
	I0906 12:24:19.534176    6239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5029,"bootTime":1725645630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:24:19.534248    6239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:24:19.538698    6239 out.go:177] * [stopped-upgrade-236000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:24:19.545755    6239 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:24:19.545804    6239 notify.go:220] Checking for updates...
	I0906 12:24:19.551663    6239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:24:19.557593    6239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:24:19.560729    6239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:24:19.563707    6239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:24:19.566696    6239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:24:19.569941    6239 config.go:182] Loaded profile config "stopped-upgrade-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:24:19.573663    6239 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 12:24:19.576623    6239 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:24:19.580674    6239 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:24:19.586653    6239 start.go:297] selected driver: qemu2
	I0906 12:24:19.586661    6239 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:24:19.586747    6239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:24:19.589178    6239 cni.go:84] Creating CNI manager for ""
	I0906 12:24:19.589197    6239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:24:19.589223    6239 start.go:340] cluster config:
	{Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:24:19.589272    6239 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:24:19.596673    6239 out.go:177] * Starting "stopped-upgrade-236000" primary control-plane node in "stopped-upgrade-236000" cluster
	I0906 12:24:19.600666    6239 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0906 12:24:19.600679    6239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0906 12:24:19.600686    6239 cache.go:56] Caching tarball of preloaded images
	I0906 12:24:19.600733    6239 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:24:19.600738    6239 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0906 12:24:19.600796    6239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/config.json ...
	I0906 12:24:19.601145    6239 start.go:360] acquireMachinesLock for stopped-upgrade-236000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:24:19.601179    6239 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "stopped-upgrade-236000"
	I0906 12:24:19.601189    6239 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:24:19.601193    6239 fix.go:54] fixHost starting: 
	I0906 12:24:19.601306    6239 fix.go:112] recreateIfNeeded on stopped-upgrade-236000: state=Stopped err=<nil>
	W0906 12:24:19.601315    6239 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:24:19.605668    6239 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-236000" ...
	I0906 12:24:19.613650    6239 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:24:19.613717    6239 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50256-:22,hostfwd=tcp::50257-:2376,hostname=stopped-upgrade-236000 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/disk.qcow2
	I0906 12:24:19.658234    6239 main.go:141] libmachine: STDOUT: 
	I0906 12:24:19.658269    6239 main.go:141] libmachine: STDERR: 
	I0906 12:24:19.658274    6239 main.go:141] libmachine: Waiting for VM to start (ssh -p 50256 docker@127.0.0.1)...
	I0906 12:24:38.910760    6239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/config.json ...
	I0906 12:24:38.911624    6239 machine.go:93] provisionDockerMachine start ...
	I0906 12:24:38.911822    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:38.912434    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:38.912450    6239 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:24:38.991296    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:24:38.991324    6239 buildroot.go:166] provisioning hostname "stopped-upgrade-236000"
	I0906 12:24:38.991450    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:38.991665    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:38.991674    6239 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-236000 && echo "stopped-upgrade-236000" | sudo tee /etc/hostname
	I0906 12:24:39.065044    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-236000
	
	I0906 12:24:39.065104    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.065257    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.065267    6239 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-236000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-236000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-236000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:24:39.134095    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:24:39.134108    6239 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-2143/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-2143/.minikube}
	I0906 12:24:39.134118    6239 buildroot.go:174] setting up certificates
	I0906 12:24:39.134123    6239 provision.go:84] configureAuth start
	I0906 12:24:39.134133    6239 provision.go:143] copyHostCerts
	I0906 12:24:39.134217    6239 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem, removing ...
	I0906 12:24:39.134227    6239 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem
	I0906 12:24:39.134355    6239 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.pem (1082 bytes)
	I0906 12:24:39.134550    6239 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem, removing ...
	I0906 12:24:39.134555    6239 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem
	I0906 12:24:39.134610    6239 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/cert.pem (1123 bytes)
	I0906 12:24:39.134728    6239 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem, removing ...
	I0906 12:24:39.134732    6239 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem
	I0906 12:24:39.134788    6239 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-2143/.minikube/key.pem (1675 bytes)
	I0906 12:24:39.134906    6239 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-236000 san=[127.0.0.1 localhost minikube stopped-upgrade-236000]
	I0906 12:24:39.263625    6239 provision.go:177] copyRemoteCerts
	I0906 12:24:39.263667    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:24:39.263675    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:24:39.297209    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 12:24:39.303976    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 12:24:39.310773    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:24:39.317979    6239 provision.go:87] duration metric: took 183.8525ms to configureAuth
	I0906 12:24:39.317988    6239 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:24:39.318082    6239 config.go:182] Loaded profile config "stopped-upgrade-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:24:39.318121    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.318201    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.318205    6239 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:24:39.379818    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:24:39.379829    6239 buildroot.go:70] root file system type: tmpfs
	I0906 12:24:39.379883    6239 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:24:39.379936    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.380059    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.380097    6239 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:24:39.446858    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:24:39.446912    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.447030    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.447046    6239 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:24:39.816920    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:24:39.816935    6239 machine.go:96] duration metric: took 905.304833ms to provisionDockerMachine
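
The SSH command a few lines up is an idempotent unit install: `diff -u` exits 0 when the new unit matches the existing one, so the `|| { mv; daemon-reload; enable; restart; }` branch fires only when the unit is missing or changed. Here diff fails because /lib/systemd/system/docker.service does not exist yet, so the .new file is moved into place, the unit is enabled (creating the symlink shown above), and docker is restarted. A rough local sketch of the same replace-only-if-changed idea, with a hypothetical helper name:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged is a sketch: swap newPath into path only when path is
// missing or differs, and report whether a service restart is needed.
func installIfChanged(path, newPath string) (bool, error) {
	oldData, err := os.ReadFile(path)
	newData, nerr := os.ReadFile(newPath)
	if nerr != nil {
		return false, nerr
	}
	if err == nil && bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath) // unchanged: drop the .new file
	}
	return true, os.Rename(newPath, path) // missing or changed: swap in
}

func main() {
	changed, err := installIfChanged("docker.service", "docker.service.new")
	fmt.Println("restart needed:", changed, "err:", err)
}
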
	I0906 12:24:39.816942    6239 start.go:293] postStartSetup for "stopped-upgrade-236000" (driver="qemu2")
	I0906 12:24:39.816950    6239 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:24:39.817004    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:24:39.817014    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:24:39.852685    6239 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:24:39.853966    6239 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 12:24:39.853975    6239 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/addons for local assets ...
	I0906 12:24:39.854066    6239 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-2143/.minikube/files for local assets ...
	I0906 12:24:39.854181    6239 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem -> 26722.pem in /etc/ssl/certs
	I0906 12:24:39.854307    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:24:39.857329    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /etc/ssl/certs/26722.pem (1708 bytes)
	I0906 12:24:39.864022    6239 start.go:296] duration metric: took 47.073333ms for postStartSetup
	I0906 12:24:39.864043    6239 fix.go:56] duration metric: took 20.262996417s for fixHost
	I0906 12:24:39.864081    6239 main.go:141] libmachine: Using SSH client type: native
	I0906 12:24:39.864188    6239 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012b05a0] 0x1012b2e00 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0906 12:24:39.864192    6239 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:24:39.923300    6239 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650679.447463379
	
	I0906 12:24:39.923308    6239 fix.go:216] guest clock: 1725650679.447463379
	I0906 12:24:39.923311    6239 fix.go:229] Guest: 2024-09-06 12:24:39.447463379 -0700 PDT Remote: 2024-09-06 12:24:39.864045 -0700 PDT m=+20.368479293 (delta=-416.581621ms)
	I0906 12:24:39.923323    6239 fix.go:200] guest clock delta is within tolerance: -416.581621ms
	I0906 12:24:39.923326    6239 start.go:83] releasing machines lock for "stopped-upgrade-236000", held for 20.322288792s
	I0906 12:24:39.923387    6239 ssh_runner.go:195] Run: cat /version.json
	I0906 12:24:39.923391    6239 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:24:39.923396    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:24:39.923407    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	W0906 12:24:39.923988    6239 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50256: connect: connection refused
	I0906 12:24:39.924010    6239 retry.go:31] will retry after 183.070329ms: dial tcp [::1]:50256: connect: connection refused
	W0906 12:24:40.142165    6239 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0906 12:24:40.142232    6239 ssh_runner.go:195] Run: systemctl --version
	I0906 12:24:40.144373    6239 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:24:40.146213    6239 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:24:40.146241    6239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0906 12:24:40.149588    6239 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0906 12:24:40.154781    6239 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:24:40.154793    6239 start.go:495] detecting cgroup driver to use...
	I0906 12:24:40.154864    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:24:40.162025    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0906 12:24:40.165379    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:24:40.168603    6239 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:24:40.168635    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:24:40.171731    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:24:40.174629    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:24:40.178085    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:24:40.181304    6239 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:24:40.184539    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:24:40.187372    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:24:40.190424    6239 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:24:40.193662    6239 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:24:40.196625    6239 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:24:40.199162    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:40.278071    6239 ssh_runner.go:195] Run: sudo systemctl restart containerd
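
The series of `sed -i` edits above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, forcing SystemdCgroup = false so containerd matches the "cgroupfs" driver chosen here, migrating runtime names to io.containerd.runc.v2, and fixing the CNI conf_dir, followed by a daemon-reload and restart. The SystemdCgroup flip, as a pure-Go equivalent of that sed expression (a sketch, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}
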
	I0906 12:24:40.284381    6239 start.go:495] detecting cgroup driver to use...
	I0906 12:24:40.284461    6239 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:24:40.290189    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:24:40.295887    6239 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:24:40.305643    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:24:40.310124    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:24:40.314954    6239 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:24:40.345846    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:24:40.350707    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:24:40.356070    6239 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:24:40.357456    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:24:40.360258    6239 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
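
"scp memory" in lines like the one above means the payload is an in-memory byte slice rendered by minikube rather than a local file; it is streamed over the SSH session and written on the guest. One way to approximate that with the stock ssh client — a sketch; the host, port, and key path are taken from the log, and the payload content is illustrative:

package main

import (
	"bytes"
	"log"
	"os/exec"
)

func main() {
	payload := []byte("[Service]\nExecStart=\n") // in-memory content, illustrative only
	// Pipe the buffer into `sudo tee` on the guest instead of scp'ing a temp file.
	cmd := exec.Command("ssh",
		"-i", "/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa",
		"-p", "50256", "docker@localhost",
		"sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf >/dev/null")
	cmd.Stdin = bytes.NewReader(payload)
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
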
	I0906 12:24:40.365401    6239 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:24:40.447590    6239 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:24:40.518115    6239 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:24:40.518177    6239 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:24:40.523295    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:40.601195    6239 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:24:41.759830    6239 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158622709s)
	I0906 12:24:41.759908    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:24:41.764543    6239 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:24:41.772117    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:24:41.776466    6239 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:24:41.853261    6239 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:24:41.925814    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:42.006856    6239 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:24:42.012813    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:24:42.017448    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:42.095006    6239 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:24:42.133651    6239 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:24:42.133726    6239 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:24:42.137044    6239 start.go:563] Will wait 60s for crictl version
	I0906 12:24:42.137091    6239 ssh_runner.go:195] Run: which crictl
	I0906 12:24:42.138321    6239 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:24:42.153080    6239 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0906 12:24:42.153158    6239 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:24:42.169382    6239 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:24:42.195275    6239 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0906 12:24:42.195342    6239 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0906 12:24:42.196521    6239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:24:42.199879    6239 kubeadm.go:883] updating cluster {Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0906 12:24:42.199921    6239 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0906 12:24:42.199962    6239 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:24:42.210242    6239 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:24:42.210250    6239 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0906 12:24:42.210298    6239 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:24:42.213827    6239 ssh_runner.go:195] Run: which lz4
	I0906 12:24:42.215297    6239 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 12:24:42.216524    6239 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 12:24:42.216535    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0906 12:24:43.162451    6239 docker.go:649] duration metric: took 947.18675ms to copy over tarball
	I0906 12:24:43.162506    6239 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 12:24:44.324756    6239 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162245375s)
	I0906 12:24:44.324773    6239 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 12:24:44.340054    6239 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 12:24:44.342919    6239 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0906 12:24:44.348122    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:44.434175    6239 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:24:45.933141    6239 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.498960292s)
	I0906 12:24:45.933224    6239 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:24:45.945273    6239 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 12:24:45.945282    6239 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0906 12:24:45.945288    6239 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 12:24:45.949081    6239 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:45.950705    6239 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:45.952685    6239 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:45.952825    6239 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:45.953457    6239 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:45.953732    6239 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:45.954717    6239 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:45.956317    6239 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:45.956427    6239 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:45.958289    6239 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0906 12:24:45.958360    6239 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:45.958381    6239 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:45.958881    6239 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:45.959427    6239 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:45.960453    6239 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0906 12:24:45.961031    6239 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.342345    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:46.356147    6239 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0906 12:24:46.356168    6239 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:46.356223    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0906 12:24:46.366352    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0906 12:24:46.381539    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:46.386507    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:46.387795    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:46.392486    6239 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0906 12:24:46.392512    6239 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:46.392564    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0906 12:24:46.400884    6239 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0906 12:24:46.400905    6239 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:46.400955    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0906 12:24:46.406689    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:46.407134    6239 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0906 12:24:46.407151    6239 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:46.407177    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0906 12:24:46.412824    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0906 12:24:46.419256    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0906 12:24:46.422512    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0906 12:24:46.429878    6239 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0906 12:24:46.429902    6239 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0906 12:24:46.429951    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0906 12:24:46.429955    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0906 12:24:46.441346    6239 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0906 12:24:46.441477    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.443481    6239 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0906 12:24:46.443499    6239 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0906 12:24:46.443527    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0906 12:24:46.448098    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0906 12:24:46.455857    6239 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0906 12:24:46.455883    6239 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.455935    6239 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0906 12:24:46.456891    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0906 12:24:46.456997    6239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0906 12:24:46.466537    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0906 12:24:46.466634    6239 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0906 12:24:46.466645    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0906 12:24:46.466660    6239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0906 12:24:46.469054    6239 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0906 12:24:46.469069    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0906 12:24:46.482220    6239 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0906 12:24:46.482240    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0906 12:24:46.523314    6239 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0906 12:24:46.529677    6239 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0906 12:24:46.529690    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0906 12:24:46.568707    6239 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0906 12:24:46.725504    6239 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 12:24:46.725693    6239 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:46.745408    6239 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0906 12:24:46.745446    6239 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:46.745513    6239 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:24:46.760911    6239 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 12:24:46.761028    6239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 12:24:46.762474    6239 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0906 12:24:46.762484    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0906 12:24:46.790529    6239 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 12:24:46.790544    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0906 12:24:47.021794    6239 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 12:24:47.021835    6239 cache_images.go:92] duration metric: took 1.076548917s to LoadCachedImages
	W0906 12:24:47.021872    6239 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
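
Each cache hit above follows the same shape: scp the image tarball from the host cache into /var/lib/minikube/images, then stream it into the daemon with `sudo cat <file> | docker load` (docker load accepts an image tarball on stdin). A stripped-down sketch of the load half:

package main

import (
	"log"
	"os"
	"os/exec"
)

// loadImage streams a saved image tarball into the local docker daemon,
// the same way the log's `sudo cat <file> | docker load` pipeline does.
func loadImage(tarPath string) error {
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	return cmd.Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		log.Fatal(err)
	}
}
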
	I0906 12:24:47.021878    6239 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0906 12:24:47.021936    6239 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-236000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:24:47.021995    6239 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:24:47.035540    6239 cni.go:84] Creating CNI manager for ""
	I0906 12:24:47.035554    6239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:24:47.035563    6239 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:24:47.035572    6239 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-236000 NodeName:stopped-upgrade-236000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:24:47.035645    6239 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-236000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
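
The generated kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. To inspect such a file programmatically, a decoder loop handles the multi-document stream — a sketch using gopkg.in/yaml.v3, with an illustrative file path:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Each document carries apiVersion/kind, e.g. kubeadm.k8s.io/v1beta3 InitConfiguration.
		fmt.Println(doc["apiVersion"], doc["kind"])
	}
}
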
	
	I0906 12:24:47.035709    6239 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0906 12:24:47.038542    6239 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:24:47.038576    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:24:47.041533    6239 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0906 12:24:47.046684    6239 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:24:47.051901    6239 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0906 12:24:47.056971    6239 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0906 12:24:47.058239    6239 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
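
The bash one-liner above is minikube's /etc/hosts upsert: `grep -v` strips any existing line tagged with a tab followed by control-plane.minikube.internal, the `echo` appends the fresh entry, and the result is copied back over /etc/hosts with sudo. The same filter-and-append in plain Go (a sketch):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing entry for name and appends ip<TAB>name,
// mirroring the { grep -v ...; echo ...; } pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.2.15\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "10.0.2.15", "control-plane.minikube.internal"))
}
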
	I0906 12:24:47.062218    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:24:47.141391    6239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:24:47.147170    6239 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000 for IP: 10.0.2.15
	I0906 12:24:47.147178    6239 certs.go:194] generating shared ca certs ...
	I0906 12:24:47.147192    6239 certs.go:226] acquiring lock for ca certs: {Name:mkeb2acf337d35e5b807329b963b0c0723ad2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.147346    6239 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key
	I0906 12:24:47.147396    6239 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key
	I0906 12:24:47.147404    6239 certs.go:256] generating profile certs ...
	I0906 12:24:47.147479    6239 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.key
	I0906 12:24:47.147498    6239 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6
	I0906 12:24:47.147512    6239 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0906 12:24:47.236019    6239 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6 ...
	I0906 12:24:47.236037    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6: {Name:mke61c1e49c05f6676b28fae907efded9d9fb0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.237384    6239 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6 ...
	I0906 12:24:47.237390    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6: {Name:mk6e11fa94f9059d5bb968b331725636129e1469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.237551    6239 certs.go:381] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt.74969ff6 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt
	I0906 12:24:47.237708    6239 certs.go:385] copying /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key.74969ff6 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key
	I0906 12:24:47.237879    6239 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/proxy-client.key
	I0906 12:24:47.238019    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem (1338 bytes)
	W0906 12:24:47.238049    6239 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672_empty.pem, impossibly tiny 0 bytes
	I0906 12:24:47.238055    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:24:47.238075    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem (1082 bytes)
	I0906 12:24:47.238098    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:24:47.238126    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/key.pem (1675 bytes)
	I0906 12:24:47.238167    6239 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem (1708 bytes)
	I0906 12:24:47.238499    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:24:47.245755    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 12:24:47.252445    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:24:47.259313    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 12:24:47.266312    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 12:24:47.273874    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0906 12:24:47.281303    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:24:47.288141    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:24:47.294882    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/ssl/certs/26722.pem --> /usr/share/ca-certificates/26722.pem (1708 bytes)
	I0906 12:24:47.302154    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:24:47.309499    6239 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/2672.pem --> /usr/share/ca-certificates/2672.pem (1338 bytes)
	I0906 12:24:47.316959    6239 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:24:47.322126    6239 ssh_runner.go:195] Run: openssl version
	I0906 12:24:47.324072    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26722.pem && ln -fs /usr/share/ca-certificates/26722.pem /etc/ssl/certs/26722.pem"
	I0906 12:24:47.326999    6239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26722.pem
	I0906 12:24:47.328416    6239 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:44 /usr/share/ca-certificates/26722.pem
	I0906 12:24:47.328435    6239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26722.pem
	I0906 12:24:47.330119    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/26722.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:24:47.333570    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:24:47.336840    6239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:47.338285    6239 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:47.338301    6239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:24:47.340062    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:24:47.342916    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2672.pem && ln -fs /usr/share/ca-certificates/2672.pem /etc/ssl/certs/2672.pem"
	I0906 12:24:47.346384    6239 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2672.pem
	I0906 12:24:47.347773    6239 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:44 /usr/share/ca-certificates/2672.pem
	I0906 12:24:47.347796    6239 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2672.pem
	I0906 12:24:47.349430    6239 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2672.pem /etc/ssl/certs/51391683.0"
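
The `openssl x509 -hash -noout` calls above print each certificate's subject hash, and the `ln -fs <cert>.pem /etc/ssl/certs/<hash>.0` commands create the hash-named symlinks OpenSSL uses to look up CA certificates by subject. A sketch of that pairing — it invokes the real openssl CLI and only prints the symlink command rather than creating it, since the log does that step with sudo:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	fmt.Printf("ln -fs %s %s\n", cert, link) // the symlink the log creates via sudo
}
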
	I0906 12:24:47.352479    6239 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:24:47.353985    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:24:47.356035    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:24:47.357898    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:24:47.359879    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:24:47.361635    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:24:47.363733    6239 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
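
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how this restart path decides whether the control-plane certs need regeneration. The equivalent check in pure Go (a sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
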
	I0906 12:24:47.365576    6239 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50331 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0906 12:24:47.365650    6239 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:24:47.377455    6239 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:24:47.380696    6239 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:24:47.380702    6239 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:24:47.380727    6239 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:24:47.383687    6239 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:24:47.383967    6239 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-236000" does not appear in /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:24:47.384066    6239 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-2143/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-236000" cluster setting kubeconfig missing "stopped-upgrade-236000" context setting]
	I0906 12:24:47.384280    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:24:47.384946    6239 kapi.go:59] client config for stopped-upgrade-236000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10286bf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:24:47.385267    6239 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:24:47.387932    6239 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-236000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0906 12:24:47.387937    6239 kubeadm.go:1160] stopping kube-system containers ...
	I0906 12:24:47.387974    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:24:47.400463    6239 docker.go:483] Stopping containers: [6c0684138801 b31953704fbe d586e13d97c8 c859fcd79335 f1e7479bac8f 281e80785bbc 844d4edf7d83 581e8a4e86d3]
	I0906 12:24:47.400524    6239 ssh_runner.go:195] Run: docker stop 6c0684138801 b31953704fbe d586e13d97c8 c859fcd79335 f1e7479bac8f 281e80785bbc 844d4edf7d83 581e8a4e86d3
	I0906 12:24:47.412103    6239 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 12:24:47.417546    6239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:24:47.420283    6239 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:24:47.420288    6239 kubeadm.go:157] found existing configuration files:
	
	I0906 12:24:47.420317    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf
	I0906 12:24:47.422579    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:24:47.422597    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:24:47.425590    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf
	I0906 12:24:47.428301    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:24:47.428323    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:24:47.430977    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf
	I0906 12:24:47.433887    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:24:47.433916    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:24:47.436766    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf
	I0906 12:24:47.439188    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:24:47.439211    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
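Condensed, the stale-config sweep above is equivalent to the following sketch (the endpoint and file list are taken from the log; here each grep exits with status 2 because the file is absent, so every conf file is unconditionally removed):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50331" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done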
	I0906 12:24:47.442256    6239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:24:47.445467    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.469333    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.780018    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.913137    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.946001    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:24:47.978973    6239 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:24:47.979068    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:48.481161    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:48.981091    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:24:48.985146    6239 api_server.go:72] duration metric: took 1.00618275s to wait for apiserver process to appear ...
	I0906 12:24:48.985155    6239 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:24:48.985164    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:53.987289    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:53.987333    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:24:58.987644    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:24:58.987684    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:03.988463    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:03.988517    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:08.989225    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:08.989266    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:13.990041    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:13.990058    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:18.990991    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:18.991038    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:23.992415    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:23.992474    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:28.994282    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:28.994342    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:33.996836    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:33.996886    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:38.997945    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:38.997993    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:44.000279    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:44.000305    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:49.002517    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
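Each probe above runs with a five-second client timeout and is retried immediately after the previous one expires; as a sketch against the same endpoint (the -k flag is illustrative only, standing in for the client certificates listed in the rest.Config earlier):

    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
      : # each failed attempt already consumes the full 5s timeout
    done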
	I0906 12:25:49.002781    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:49.032018    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:25:49.032143    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:49.050150    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:25:49.050243    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:49.064357    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:25:49.064432    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:49.076397    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:25:49.076459    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:49.086631    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:25:49.086697    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:49.097407    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:25:49.097468    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:49.107414    6239 logs.go:276] 0 containers: []
	W0906 12:25:49.107424    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:49.107495    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:49.117689    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:25:49.117707    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:49.117713    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:49.203344    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:25:49.203358    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:25:49.215595    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:25:49.215607    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:25:49.236725    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:49.236739    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:49.262898    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:49.262907    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:49.301311    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:49.301321    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:49.305496    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:25:49.305503    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:25:49.319103    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:25:49.319113    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:25:49.330598    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:25:49.330611    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:25:49.343898    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:25:49.343911    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:49.356441    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:25:49.356455    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:25:49.370961    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:25:49.370971    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:25:49.412628    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:25:49.412639    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:25:49.428101    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:25:49.428112    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:25:49.441469    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:25:49.441481    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:25:49.456856    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:25:49.456867    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:25:49.476848    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:25:49.476859    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:25:51.989540    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:25:56.990516    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:25:56.990685    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:25:57.015806    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:25:57.015924    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:25:57.032655    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:25:57.032739    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:25:57.049146    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:25:57.049217    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:25:57.060505    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:25:57.060581    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:25:57.071581    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:25:57.071642    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:25:57.082168    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:25:57.082225    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:25:57.092767    6239 logs.go:276] 0 containers: []
	W0906 12:25:57.092793    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:25:57.092856    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:25:57.103793    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:25:57.103811    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:25:57.103817    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:25:57.115326    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:25:57.115336    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:25:57.130654    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:25:57.130663    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:25:57.168894    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:25:57.168904    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:25:57.182443    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:25:57.182454    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:25:57.220527    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:25:57.220553    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:25:57.235066    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:25:57.235077    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:25:57.239219    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:25:57.239227    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:25:57.256483    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:25:57.256495    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:25:57.268107    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:25:57.268120    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:25:57.279784    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:25:57.279799    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:25:57.318318    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:25:57.318331    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:25:57.332760    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:25:57.332777    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:25:57.351388    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:25:57.351398    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:25:57.375837    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:25:57.375848    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:25:57.388250    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:25:57.388265    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:25:57.400940    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:25:57.400954    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:25:59.913747    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:04.916293    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:04.916628    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:04.952738    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:04.952878    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:04.973355    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:04.973448    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:04.993204    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:04.993276    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:05.005450    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:05.005525    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:05.016184    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:05.016241    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:05.026701    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:05.026777    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:05.037034    6239 logs.go:276] 0 containers: []
	W0906 12:26:05.037046    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:05.037103    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:05.048342    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:05.048362    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:05.048368    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:05.052784    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:05.052795    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:05.066067    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:05.066081    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:05.084076    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:05.084086    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:05.096268    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:05.096279    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:05.134672    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:05.134683    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:05.148303    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:05.148313    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:05.159730    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:05.159742    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:05.171382    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:05.171393    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:05.208897    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:05.208905    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:05.244523    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:05.244534    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:05.256651    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:05.256668    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:05.268276    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:05.268290    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:05.292037    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:05.292047    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:05.311355    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:05.311367    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:05.325655    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:05.325668    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:05.338412    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:05.338423    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:07.855331    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:12.857715    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:12.857888    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:12.872199    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:12.872277    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:12.884780    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:12.884846    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:12.895336    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:12.895404    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:12.906068    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:12.906141    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:12.916886    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:12.916954    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:12.927108    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:12.927178    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:12.938079    6239 logs.go:276] 0 containers: []
	W0906 12:26:12.938092    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:12.938152    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:12.948653    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:12.948672    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:12.948679    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:12.959787    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:12.959799    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:12.974510    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:12.974523    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:12.993545    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:12.993556    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:13.006076    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:13.006087    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:13.029372    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:13.029380    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:13.043542    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:13.043555    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:13.059168    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:13.059178    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:13.073336    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:13.073346    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:13.094275    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:13.094287    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:13.098643    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:13.098651    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:13.115387    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:13.115400    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:13.153184    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:13.153195    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:13.166675    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:13.166687    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:13.179282    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:13.179295    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:13.190949    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:13.190962    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:13.228223    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:13.228236    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:15.764439    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:20.766783    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:20.766903    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:20.779359    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:20.779438    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:20.789938    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:20.790001    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:20.800264    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:20.800330    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:20.810603    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:20.810697    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:20.821444    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:20.821507    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:20.832070    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:20.832129    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:20.842237    6239 logs.go:276] 0 containers: []
	W0906 12:26:20.842248    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:20.842296    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:20.852518    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:20.852532    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:20.852537    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:20.869557    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:20.869570    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:20.881084    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:20.881099    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:20.906705    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:20.906713    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:20.945435    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:20.945446    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:20.959458    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:20.959469    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:20.970762    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:20.970776    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:20.984485    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:20.984499    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:20.999271    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:20.999281    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:21.003317    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:21.003326    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:21.016795    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:21.016810    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:21.056281    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:21.056292    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:21.068168    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:21.068178    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:21.080904    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:21.080917    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:21.092104    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:21.092115    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:21.131827    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:21.131838    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:21.146800    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:21.146815    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:23.660665    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:28.663282    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:28.663465    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:28.686026    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:28.686141    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:28.700479    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:28.700556    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:28.711794    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:28.711870    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:28.724007    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:28.724071    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:28.734643    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:28.734706    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:28.745292    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:28.745349    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:28.756876    6239 logs.go:276] 0 containers: []
	W0906 12:26:28.756889    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:28.756944    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:28.768545    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:28.768563    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:28.768569    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:28.793360    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:28.793368    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:28.807698    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:28.807714    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:28.821671    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:28.821682    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:28.833324    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:28.833334    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:28.846537    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:28.846548    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:28.857884    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:28.857896    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:28.870022    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:28.870035    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:28.881341    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:28.881352    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:28.896398    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:28.896411    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:28.908239    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:28.908250    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:28.946572    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:28.946588    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:28.961096    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:28.961107    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:28.974467    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:28.974480    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:28.992265    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:28.992281    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:29.029657    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:29.029667    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:29.034218    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:29.034226    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:31.579453    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:36.580498    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:36.580735    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:36.606696    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:36.606826    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:36.624395    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:36.624469    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:36.637737    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:36.637799    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:36.649565    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:36.649639    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:36.660302    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:36.660374    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:36.670685    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:36.670750    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:36.680592    6239 logs.go:276] 0 containers: []
	W0906 12:26:36.680606    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:36.680665    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:36.691053    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:36.691070    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:36.691076    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:36.704974    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:36.704985    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:36.716467    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:36.716476    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:36.740446    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:36.740453    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:36.744429    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:36.744435    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:36.758769    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:36.758781    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:36.777444    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:36.777454    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:36.795314    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:36.795325    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:36.835113    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:36.835128    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:36.847604    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:36.847618    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:36.878214    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:36.878229    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:36.892851    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:36.892860    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:36.930807    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:36.930820    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:36.944410    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:36.944421    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:36.957605    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:36.957618    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:36.976003    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:36.976017    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:36.988170    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:36.988185    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:39.526091    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:44.528445    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:44.528641    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:44.552742    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:44.552861    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:44.569976    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:44.570056    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:44.582461    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:44.582532    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:44.593630    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:44.593701    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:44.606180    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:44.606246    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:44.617099    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:44.617157    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:44.627765    6239 logs.go:276] 0 containers: []
	W0906 12:26:44.627780    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:44.627839    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:44.638399    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:44.638424    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:44.638429    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:44.652539    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:44.652550    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:44.667473    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:44.667483    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:44.680150    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:44.680162    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:44.720363    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:44.720375    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:44.756039    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:44.756052    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:44.768385    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:44.768396    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:44.781376    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:44.781388    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:44.793426    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:44.793439    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:44.808352    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:44.808362    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:44.826090    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:44.826101    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:44.830636    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:44.830646    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:44.848070    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:44.848082    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:44.886354    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:44.886372    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:44.902444    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:44.902458    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:44.914436    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:44.914448    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:44.937725    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:44.937732    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:47.451810    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:26:52.454167    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:26:52.454353    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:26:52.470935    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:26:52.471022    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:26:52.485021    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:26:52.485093    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:26:52.496353    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:26:52.496419    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:26:52.507019    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:26:52.507086    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:26:52.517661    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:26:52.517728    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:26:52.531338    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:26:52.531403    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:26:52.541585    6239 logs.go:276] 0 containers: []
	W0906 12:26:52.541601    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:26:52.541657    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:26:52.552320    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:26:52.552339    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:26:52.552345    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:26:52.586923    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:26:52.586935    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:26:52.601197    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:26:52.601211    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:26:52.615083    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:26:52.615095    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:26:52.626041    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:26:52.626052    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:26:52.630429    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:26:52.630434    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:26:52.669893    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:26:52.669906    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:26:52.684402    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:26:52.684415    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:26:52.695570    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:26:52.695581    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:26:52.707776    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:26:52.707786    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:26:52.723301    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:26:52.723311    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:26:52.736847    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:26:52.736857    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:26:52.777059    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:26:52.777071    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:26:52.791348    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:26:52.791359    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:26:52.809800    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:26:52.809812    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:26:52.820535    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:26:52.820546    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:26:52.843869    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:26:52.843877    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:26:55.357311    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:00.359925    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
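The five-second gap between each "Checking apiserver healthz" line and its matching "stopped" line is the HTTP client timeout firing before any response headers arrive; "Client.Timeout exceeded while awaiting headers" is Go's net/http wording for exactly that. A minimal sketch of the probe pattern these lines reflect, with illustrative names (checkHealthz is not minikube's actual function) and the apiserver's self-signed certificate assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// matches the ~5s gap between "Checking" and "stopped" lines above
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the guest apiserver presents a self-signed cert, so verification
			// is skipped in this sketch
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// a hung endpoint surfaces here as "context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}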
	I0906 12:27:00.360142    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:00.387324    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:00.387442    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:00.411442    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:00.411508    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:00.424692    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:00.424754    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:00.436024    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:00.436090    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:00.446735    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:00.446799    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:00.460560    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:00.460628    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:00.471235    6239 logs.go:276] 0 containers: []
	W0906 12:27:00.471247    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:00.471308    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:00.481963    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:00.481980    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:00.481986    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:00.493117    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:00.493127    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:00.518444    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:00.518452    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:00.534016    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:00.534030    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:00.572314    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:00.572325    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:00.587054    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:00.587065    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:00.604337    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:00.604348    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:00.615901    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:00.615916    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:00.627826    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:00.627837    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:00.640719    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:00.640730    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:00.645623    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:00.645630    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:00.679924    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:00.679934    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:00.699683    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:00.699694    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:00.717435    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:00.717446    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:00.756876    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:00.756887    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:00.771203    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:00.771213    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:00.783166    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:00.783176    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:03.297723    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:08.300124    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:08.300246    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:08.310940    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:08.311019    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:08.321551    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:08.321618    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:08.332618    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:08.332684    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:08.342798    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:08.342861    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:08.353033    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:08.353101    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:08.363183    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:08.363249    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:08.373409    6239 logs.go:276] 0 containers: []
	W0906 12:27:08.373422    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:08.373470    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:08.384200    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:08.384220    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:08.384226    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:08.401476    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:08.401487    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:08.416745    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:08.416758    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:08.428937    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:08.428949    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:08.448258    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:08.448270    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:08.463188    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:08.463201    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:08.474432    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:08.474447    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:08.489364    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:08.489375    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:08.504069    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:08.504079    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:08.515349    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:08.515359    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:08.527266    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:08.527278    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:08.541322    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:08.541332    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:08.575184    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:08.575198    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:08.613396    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:08.613406    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:08.625121    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:08.625131    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:08.648772    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:08.648780    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:08.687544    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:08.687554    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:11.193649    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:16.195786    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:16.195880    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:16.207189    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:16.207267    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:16.217332    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:16.217405    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:16.227919    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:16.227976    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:16.238405    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:16.238475    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:16.248996    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:16.249052    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:16.259498    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:16.259568    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:16.271651    6239 logs.go:276] 0 containers: []
	W0906 12:27:16.271663    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:16.271713    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:16.282235    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:16.282253    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:16.282258    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:16.319332    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:16.319340    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:16.353942    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:16.353953    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:16.367992    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:16.368004    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:16.379325    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:16.379338    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:16.416437    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:16.416446    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:16.427833    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:16.427844    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:16.439398    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:16.439410    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:16.451468    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:16.451478    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:16.463984    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:16.463995    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:16.473058    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:16.473066    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:16.486877    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:16.486890    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:16.499673    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:16.499686    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:16.514195    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:16.514206    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:16.536856    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:16.536867    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:16.550326    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:16.550337    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:16.564772    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:16.564783    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:19.091295    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:24.093761    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:24.094080    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:24.127919    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:24.128044    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:24.147039    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:24.147136    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:24.167029    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:24.167111    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:24.179377    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:24.179459    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:24.200873    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:24.200946    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:24.212705    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:24.212776    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:24.223231    6239 logs.go:276] 0 containers: []
	W0906 12:27:24.223243    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:24.223297    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:24.234140    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:24.234157    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:24.234165    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:24.271365    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:24.271377    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:24.285325    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:24.285337    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:24.299858    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:24.299869    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:24.310681    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:24.310693    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:24.347919    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:24.347930    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:24.352345    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:24.352351    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:24.364141    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:24.364150    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:24.378314    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:24.378327    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:24.392157    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:24.392169    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:24.406358    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:24.406372    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:24.417865    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:24.417876    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:24.453598    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:24.453610    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:24.472686    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:24.472697    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:24.490854    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:24.490866    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:24.513427    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:24.513436    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:24.528130    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:24.528144    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
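Every gather pass in these logs has the same two-step shape: list containers per control-plane component by the kubeadm-style "k8s_" name prefix, then tail the last 400 lines of each container's logs. A minimal self-contained sketch of that shape, with an assumed component list and an illustrative helper name (containerIDs is not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name carries
// the k8s_<component> prefix, printing one ID per line via a Go template.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// two IDs per component are common in the lines above: the live
			// container plus the exited one from an earlier start attempt
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, out)
		}
	}
}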
	I0906 12:27:27.041644    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:32.044000    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:32.044158    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:32.064398    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:32.064497    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:32.080804    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:32.080882    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:32.094433    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:32.094504    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:32.105408    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:32.105486    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:32.115889    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:32.115962    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:32.126678    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:32.126739    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:32.139741    6239 logs.go:276] 0 containers: []
	W0906 12:27:32.139756    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:32.139806    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:32.150575    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:32.150594    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:32.150601    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:32.173921    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:32.173929    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:32.210085    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:32.210099    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:32.224302    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:32.224313    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:32.241881    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:32.241892    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:32.254678    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:32.254693    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:32.269207    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:32.269217    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:32.287163    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:32.287178    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:32.299792    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:32.299803    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:32.311980    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:32.311993    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:32.316715    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:32.316722    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:32.356044    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:32.356057    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:32.372265    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:32.372276    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:32.393075    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:32.393089    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:32.432715    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:32.432725    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:32.453340    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:32.453352    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:32.467720    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:32.467730    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:34.981130    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:39.983439    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:39.983760    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:40.020062    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:40.020189    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:40.037528    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:40.037605    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:40.053814    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:40.053893    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:40.065142    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:40.065209    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:40.078270    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:40.078335    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:40.089259    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:40.089324    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:40.099344    6239 logs.go:276] 0 containers: []
	W0906 12:27:40.099357    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:40.099412    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:40.109866    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:40.109885    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:40.109890    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:40.149317    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:40.149328    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:40.183675    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:40.183689    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:40.195709    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:40.195723    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:40.206993    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:40.207005    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:40.219212    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:40.219226    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:40.233121    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:40.233135    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:40.248216    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:40.248227    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:40.261386    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:40.261398    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:40.294957    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:40.294969    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:40.307361    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:40.307376    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:40.319873    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:40.319886    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:40.330764    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:40.330775    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:40.334795    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:40.334803    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:40.348490    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:40.348504    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:40.387401    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:40.387412    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:40.404308    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:40.404318    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:42.929334    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:47.931675    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:47.932070    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:47.973368    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:47.973496    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:47.999691    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:47.999797    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:48.014248    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:48.014324    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:48.027740    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:48.027804    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:48.038417    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:48.038493    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:48.049548    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:48.049620    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:48.060031    6239 logs.go:276] 0 containers: []
	W0906 12:27:48.060043    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:48.060101    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:48.071922    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:48.071943    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:48.071950    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:48.076669    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:48.076678    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:48.095061    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:48.095072    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:48.108264    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:48.108275    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:48.123175    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:48.123187    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:48.135082    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:48.135093    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:48.146671    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:48.146685    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:48.158104    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:48.158116    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:48.198241    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:48.198254    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:48.217613    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:48.217623    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:48.229138    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:48.229149    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:48.247315    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:48.247327    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:27:48.260732    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:48.260743    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:48.294654    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:48.294666    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:48.334814    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:48.334835    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:48.349117    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:48.349130    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:48.360825    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:48.360837    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:50.886534    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:27:55.889154    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:27:55.889402    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:27:55.918344    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:27:55.918465    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:27:55.936710    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:27:55.936798    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:27:55.954733    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:27:55.954804    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:27:55.965815    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:27:55.965885    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:27:55.976015    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:27:55.976082    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:27:55.986478    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:27:55.986545    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:27:55.996436    6239 logs.go:276] 0 containers: []
	W0906 12:27:55.996448    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:27:55.996507    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:27:56.010652    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:27:56.010672    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:27:56.010678    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:27:56.025238    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:27:56.025253    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:27:56.029798    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:27:56.029805    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:27:56.043734    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:27:56.043745    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:27:56.055253    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:27:56.055265    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:27:56.069762    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:27:56.069772    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:27:56.082400    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:27:56.082410    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:27:56.120351    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:27:56.120362    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:27:56.133951    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:27:56.133963    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:27:56.145088    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:27:56.145100    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:27:56.163228    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:27:56.163238    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:27:56.174379    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:27:56.174391    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:27:56.197078    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:27:56.197087    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:27:56.235052    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:27:56.235063    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:27:56.269903    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:27:56.269915    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:27:56.282146    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:27:56.282159    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:27:56.298849    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:27:56.298863    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
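The healthz checks recur roughly every eight seconds (12:27:34.981, 12:27:42.929, 12:27:50.886 above): a five-second probe timeout followed by two to three seconds of gathering. A sketch of the outer wait loop that rhythm implies, with assumed names and an assumed overall deadline (this is not minikube's actual retry code):

package main

import (
	"fmt"
	"time"
)

// checkHealthz and gatherLogs stand in for the sketches shown earlier, so
// the loop itself is self-contained; the stubs mimic the observed behavior.
func checkHealthz(url string) error {
	return fmt.Errorf("Get %q: context deadline exceeded", url)
}

func gatherLogs() {
	time.Sleep(3 * time.Second) // stands in for the ~3s enumerate-and-tail pass
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	deadline := time.Now().Add(6 * time.Minute) // overall budget is an assumption
	for time.Now().Before(deadline) {
		fmt.Println("Checking apiserver healthz at", url, "...")
		if err := checkHealthz(url); err == nil {
			fmt.Println("apiserver is healthy")
			return
		}
		gatherLogs() // each failed probe triggers a fresh gather pass
	}
	fmt.Println("apiserver never reported healthy before the deadline")
}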
	I0906 12:27:58.813222    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:03.815519    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:03.815783    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:03.843059    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:03.843180    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:03.860346    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:03.860419    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:03.873481    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:03.873558    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:03.885184    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:03.885249    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:03.896026    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:03.896090    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:03.906338    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:03.906405    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:03.915995    6239 logs.go:276] 0 containers: []
	W0906 12:28:03.916008    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:03.916066    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:03.926345    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:03.926364    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:03.926370    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:03.964903    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:03.964915    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:03.982997    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:03.983010    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:03.994390    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:03.994402    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:04.011038    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:04.011051    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:04.028914    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:04.028926    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:04.046406    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:04.046418    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:04.059121    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:04.059134    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:04.073703    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:04.073713    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:04.085179    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:04.085190    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:04.097673    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:04.097685    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:04.108979    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:04.108991    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:04.133447    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:04.133454    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:04.171682    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:04.171691    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:04.175675    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:04.175683    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:04.214207    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:04.214218    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:04.228227    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:04.228241    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:06.742195    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:11.744521    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:11.744671    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:11.755902    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:11.755974    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:11.766444    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:11.766514    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:11.777228    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:11.777286    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:11.787564    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:11.787628    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:11.798015    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:11.798086    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:11.808732    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:11.808793    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:11.824593    6239 logs.go:276] 0 containers: []
	W0906 12:28:11.824606    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:11.824656    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:11.835014    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:11.835034    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:11.835039    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:11.847235    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:11.847249    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:11.870285    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:11.870296    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:11.883692    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:11.883702    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:11.922660    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:11.922671    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:11.934106    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:11.934118    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:11.948803    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:11.948816    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:11.960535    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:11.960546    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:11.995572    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:11.995583    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:12.010071    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:12.010082    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:12.049115    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:12.049127    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:12.062079    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:12.062093    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:12.077159    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:12.077173    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:12.088842    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:12.088853    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:12.107028    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:12.107039    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:12.119874    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:12.119884    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:12.124511    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:12.124520    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:14.640599    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:19.641050    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:19.641151    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:19.652823    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:19.652905    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:19.664808    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:19.664892    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:19.677317    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:19.677389    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:19.689762    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:19.689854    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:19.702147    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:19.702217    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:19.713763    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:19.713848    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:19.726268    6239 logs.go:276] 0 containers: []
	W0906 12:28:19.726281    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:19.726341    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:19.737151    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
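Each diagnostic pass starts by resolving container IDs per control-plane component with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", producing the "N containers: [...]" lines (logs.go:276) above. A sketch of that enumeration step, assuming docker is on PATH (helper and function names are made up for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches the k8s_<component> prefix, as in the docker ps calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the logs.go:276 format
	}
}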
	I0906 12:28:19.737169    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:19.737176    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:19.762459    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:19.762470    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:19.799989    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:19.800005    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:19.834562    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:19.834575    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:19.847946    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:19.847957    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:19.867637    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:19.867650    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:19.879113    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:19.879125    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:19.893691    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:19.893703    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:19.909277    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:19.909286    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:19.921100    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:19.921114    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:19.940097    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:19.940108    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:19.944422    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:19.944431    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:19.959200    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:19.959212    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:19.996471    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:19.996485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:20.013598    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:20.013608    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:20.025278    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:20.025289    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:20.036970    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:20.036984    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
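The "container status" step above runs a compound shell command that prefers crictl when it resolves on PATH and falls back to plain docker, so the same gather step works on either runtime. A sketch of invoking it (the command string is copied from the log, with backticks rewritten as the equivalent $(...); the wrapper function is invented):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the log: try crictl first,
// otherwise fall back to `sudo docker ps -a`.
func containerStatus() (string, error) {
	cmd := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}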
	I0906 12:28:22.552781    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:27.555060    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:27.555363    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:27.588969    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:27.589087    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:27.607224    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:27.607303    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:27.622646    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:27.622722    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:27.634616    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:27.634691    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:27.646796    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:27.646865    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:27.658953    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:27.659014    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:27.669420    6239 logs.go:276] 0 containers: []
	W0906 12:28:27.669438    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:27.669506    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:27.680235    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:27.680260    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:27.680265    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:27.719240    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:27.719249    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:27.733065    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:27.733077    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:27.771353    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:27.771365    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:27.786001    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:27.786011    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:27.799022    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:27.799033    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:27.834237    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:27.834249    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:27.846984    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:27.846994    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:27.858420    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:27.858432    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:27.869733    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:27.869744    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:27.886824    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:27.886837    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:27.891249    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:27.891257    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:27.902967    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:27.902980    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:27.915512    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:27.915523    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:27.927179    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:27.927193    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:27.941572    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:27.941583    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:27.957511    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:27.957520    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:30.481853    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:35.484124    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:35.484229    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:35.497336    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:35.497407    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:35.507483    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:35.507555    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:35.517816    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:35.517889    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:35.531340    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:35.531411    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:35.543604    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:35.543679    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:35.554306    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:35.554379    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:35.564469    6239 logs.go:276] 0 containers: []
	W0906 12:28:35.564481    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:35.564538    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:35.575862    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:35.575882    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:35.575888    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:35.593700    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:35.593710    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:35.604979    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:35.604990    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:35.644759    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:35.644782    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:35.656311    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:35.656326    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:35.695302    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:35.695318    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:35.723789    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:35.723812    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:35.748034    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:35.748048    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:35.760279    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:35.760290    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:35.795928    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:35.795944    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:35.811325    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:35.811339    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:35.822909    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:35.822923    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:35.836318    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:35.836333    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:35.853073    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:35.853085    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:35.871228    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:35.871241    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:35.895543    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:35.895561    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:35.900111    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:35.900118    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:38.414388    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:43.415048    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:43.415285    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:28:43.436154    6239 logs.go:276] 2 containers: [d811d5223e77 d586e13d97c8]
	I0906 12:28:43.436255    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:28:43.451360    6239 logs.go:276] 2 containers: [2358e3a4b0ff b31953704fbe]
	I0906 12:28:43.451452    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:28:43.463733    6239 logs.go:276] 1 containers: [9afa1a37dfea]
	I0906 12:28:43.463801    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:28:43.474754    6239 logs.go:276] 2 containers: [fd2a6c1766ef c859fcd79335]
	I0906 12:28:43.474817    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:28:43.485359    6239 logs.go:276] 1 containers: [d7b30c403020]
	I0906 12:28:43.485429    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:28:43.496549    6239 logs.go:276] 2 containers: [b85efe13e663 6c0684138801]
	I0906 12:28:43.496611    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:28:43.510656    6239 logs.go:276] 0 containers: []
	W0906 12:28:43.510668    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:28:43.510719    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:28:43.522188    6239 logs.go:276] 2 containers: [b0fa8563f999 8af6381b4963]
	I0906 12:28:43.522212    6239 logs.go:123] Gathering logs for coredns [9afa1a37dfea] ...
	I0906 12:28:43.522217    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9afa1a37dfea"
	I0906 12:28:43.533938    6239 logs.go:123] Gathering logs for kube-scheduler [fd2a6c1766ef] ...
	I0906 12:28:43.533949    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd2a6c1766ef"
	I0906 12:28:43.546415    6239 logs.go:123] Gathering logs for storage-provisioner [b0fa8563f999] ...
	I0906 12:28:43.546426    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fa8563f999"
	I0906 12:28:43.558717    6239 logs.go:123] Gathering logs for storage-provisioner [8af6381b4963] ...
	I0906 12:28:43.558728    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af6381b4963"
	I0906 12:28:43.569663    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:28:43.569676    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:28:43.591148    6239 logs.go:123] Gathering logs for etcd [b31953704fbe] ...
	I0906 12:28:43.591158    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b31953704fbe"
	I0906 12:28:43.605419    6239 logs.go:123] Gathering logs for kube-scheduler [c859fcd79335] ...
	I0906 12:28:43.605431    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c859fcd79335"
	I0906 12:28:43.627626    6239 logs.go:123] Gathering logs for kube-controller-manager [b85efe13e663] ...
	I0906 12:28:43.627637    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b85efe13e663"
	I0906 12:28:43.645441    6239 logs.go:123] Gathering logs for kube-controller-manager [6c0684138801] ...
	I0906 12:28:43.645452    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c0684138801"
	I0906 12:28:43.658137    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:28:43.658149    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:28:43.662161    6239 logs.go:123] Gathering logs for kube-apiserver [d811d5223e77] ...
	I0906 12:28:43.662171    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d811d5223e77"
	I0906 12:28:43.676082    6239 logs.go:123] Gathering logs for kube-apiserver [d586e13d97c8] ...
	I0906 12:28:43.676095    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d586e13d97c8"
	I0906 12:28:43.714506    6239 logs.go:123] Gathering logs for etcd [2358e3a4b0ff] ...
	I0906 12:28:43.714519    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2358e3a4b0ff"
	I0906 12:28:43.730940    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:28:43.730950    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:28:43.770616    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:28:43.770626    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:28:43.805198    6239 logs.go:123] Gathering logs for kube-proxy [d7b30c403020] ...
	I0906 12:28:43.805213    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7b30c403020"
	I0906 12:28:43.817107    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:28:43.817120    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:28:46.329316    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:51.331581    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0906 12:28:51.331632    6239 kubeadm.go:597] duration metric: took 4m3.952686542s to restartPrimaryControlPlane
	W0906 12:28:51.331673    6239 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 12:28:51.331689    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0906 12:28:52.339634    6239 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.007937375s)
	I0906 12:28:52.339697    6239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:28:52.345165    6239 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:28:52.348064    6239 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:28:52.350768    6239 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:28:52.350775    6239 kubeadm.go:157] found existing configuration files:
	
	I0906 12:28:52.350798    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf
	I0906 12:28:52.353135    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:28:52.353154    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:28:52.356091    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf
	I0906 12:28:52.359312    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:28:52.359338    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:28:52.362301    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf
	I0906 12:28:52.364811    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:28:52.364831    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:28:52.368065    6239 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf
	I0906 12:28:52.371188    6239 kubeadm.go:163] "https://control-plane.minikube.internal:50331" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50331 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:28:52.371209    6239 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
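The grep/rm sequence above implements a simple rule: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint; otherwise delete it so kubeadm can regenerate it. A Go sketch of that check — the endpoint and file list are from the log, but note minikube runs these commands over SSH inside the guest (ssh_runner), whereas this sketch runs them locally:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConfig deletes conf unless it mentions the expected
// control-plane endpoint; grep exits non-zero both when the pattern is
// absent and when the file does not exist, which is what the log shows.
func cleanStaleConfig(endpoint, conf string) error {
	if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
		return exec.Command("sudo", "rm", "-f", conf).Run()
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:50331" // from the log
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(endpoint, conf); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}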
	I0906 12:28:52.373804    6239 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 12:28:52.390936    6239 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0906 12:28:52.391066    6239 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 12:28:52.442154    6239 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 12:28:52.442208    6239 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 12:28:52.442274    6239 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 12:28:52.493012    6239 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:28:52.497246    6239 out.go:235]   - Generating certificates and keys ...
	I0906 12:28:52.497288    6239 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 12:28:52.497321    6239 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 12:28:52.497361    6239 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 12:28:52.497393    6239 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 12:28:52.497434    6239 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 12:28:52.497464    6239 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 12:28:52.497495    6239 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 12:28:52.497529    6239 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 12:28:52.497572    6239 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 12:28:52.497614    6239 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 12:28:52.497633    6239 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 12:28:52.497664    6239 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:28:52.653103    6239 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:28:52.812821    6239 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:28:52.875197    6239 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:28:53.197904    6239 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:28:53.227852    6239 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:28:53.228234    6239 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:28:53.228328    6239 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 12:28:53.313372    6239 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:28:53.317584    6239 out.go:235]   - Booting up control plane ...
	I0906 12:28:53.317632    6239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:28:53.317673    6239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:28:53.317710    6239 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:28:53.317762    6239 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:28:53.317868    6239 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 12:28:58.317133    6239 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001704 seconds
	I0906 12:28:58.317201    6239 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 12:28:58.321265    6239 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 12:28:58.834724    6239 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 12:28:58.834841    6239 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-236000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 12:28:59.345158    6239 kubeadm.go:310] [bootstrap-token] Using token: im3wc3.8qcj48hgtkbbi7sm
	I0906 12:28:59.348935    6239 out.go:235]   - Configuring RBAC rules ...
	I0906 12:28:59.348989    6239 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 12:28:59.349032    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 12:28:59.355694    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 12:28:59.356323    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 12:28:59.357269    6239 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 12:28:59.358231    6239 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 12:28:59.361308    6239 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 12:28:59.543231    6239 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 12:28:59.748980    6239 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 12:28:59.749581    6239 kubeadm.go:310] 
	I0906 12:28:59.749612    6239 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 12:28:59.749615    6239 kubeadm.go:310] 
	I0906 12:28:59.749655    6239 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 12:28:59.749658    6239 kubeadm.go:310] 
	I0906 12:28:59.749677    6239 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 12:28:59.749708    6239 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 12:28:59.749731    6239 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 12:28:59.749733    6239 kubeadm.go:310] 
	I0906 12:28:59.749788    6239 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 12:28:59.749793    6239 kubeadm.go:310] 
	I0906 12:28:59.749820    6239 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 12:28:59.749823    6239 kubeadm.go:310] 
	I0906 12:28:59.749846    6239 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 12:28:59.749881    6239 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 12:28:59.749916    6239 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 12:28:59.749919    6239 kubeadm.go:310] 
	I0906 12:28:59.749965    6239 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 12:28:59.750008    6239 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 12:28:59.750012    6239 kubeadm.go:310] 
	I0906 12:28:59.750053    6239 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token im3wc3.8qcj48hgtkbbi7sm \
	I0906 12:28:59.750113    6239 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 \
	I0906 12:28:59.750124    6239 kubeadm.go:310] 	--control-plane 
	I0906 12:28:59.750128    6239 kubeadm.go:310] 
	I0906 12:28:59.750174    6239 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 12:28:59.750179    6239 kubeadm.go:310] 
	I0906 12:28:59.750223    6239 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token im3wc3.8qcj48hgtkbbi7sm \
	I0906 12:28:59.750273    6239 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59dd8c6c8a0580995b4e71517efb0052c1c9fa2a3d3304f8b7b5bc84a0bff0c5 
	I0906 12:28:59.750356    6239 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
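The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A sketch of computing it in Go — the ca.crt path is assumed from the certificateDir "/var/lib/minikube/certs" reported earlier; for the same CA this should reproduce the sha256:59dd8c... value above:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery-token-ca-cert-hash:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(cert.RawSubjectPublicKeyInfo)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt") // path assumed from the log's certificateDir
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h)
}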
	I0906 12:28:59.750364    6239 cni.go:84] Creating CNI manager for ""
	I0906 12:28:59.750373    6239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:28:59.754790    6239 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 12:28:59.761781    6239 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 12:28:59.764678    6239 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
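The scp line above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown in the log. A hedged Go sketch of writing a conflist of the general shape such bridge configs take — the plugin set and subnet below are illustrative assumptions, and the file minikube actually writes may differ:

package main

import "os"

// A minimal bridge CNI conflist of the general shape written to
// /etc/cni/net.d/1-k8s.conflist. The exact 496-byte payload is not in
// the log; plugin fields and the pod subnet here are assumed.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}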
	I0906 12:28:59.772060    6239 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:28:59.772149    6239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-236000 minikube.k8s.io/updated_at=2024_09_06T12_28_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=stopped-upgrade-236000 minikube.k8s.io/primary=true
	I0906 12:28:59.772173    6239 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 12:28:59.782331    6239 ops.go:34] apiserver oom_adj: -16
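The oom_adj probe above reads /proc/<pid>/oom_adj for the apiserver; the reported -16 means the kernel strongly prefers not to OOM-kill that process. A small Go sketch of the same read — it assumes pgrep matches exactly one process, as the log's "pgrep -xnf kube-apiserver.*minikube.*" does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // e.g. -16
}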
	I0906 12:28:59.820613    6239 kubeadm.go:1113] duration metric: took 48.496208ms to wait for elevateKubeSystemPrivileges
	I0906 12:28:59.820722    6239 kubeadm.go:394] duration metric: took 4m12.456970333s to StartCluster
	I0906 12:28:59.820734    6239 settings.go:142] acquiring lock: {Name:mk12afd771d0c660db2e89d96a6968c1a28fb2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:59.820813    6239 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:28:59.821252    6239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/kubeconfig: {Name:mkb103f2b581179fd959f22a1dc4c9c6720f9366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:28:59.821437    6239 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:28:59.821534    6239 config.go:182] Loaded profile config "stopped-upgrade-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0906 12:28:59.821493    6239 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:28:59.821550    6239 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-236000"
	I0906 12:28:59.821565    6239 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-236000"
	W0906 12:28:59.821571    6239 addons.go:243] addon storage-provisioner should already be in state true
	I0906 12:28:59.821583    6239 host.go:66] Checking if "stopped-upgrade-236000" exists ...
	I0906 12:28:59.821569    6239 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-236000"
	I0906 12:28:59.821600    6239 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-236000"
	I0906 12:28:59.822584    6239 kapi.go:59] client config for stopped-upgrade-236000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/stopped-upgrade-236000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-2143/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10286bf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:28:59.822701    6239 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-236000"
	W0906 12:28:59.822705    6239 addons.go:243] addon default-storageclass should already be in state true
	I0906 12:28:59.822712    6239 host.go:66] Checking if "stopped-upgrade-236000" exists ...
	I0906 12:28:59.825795    6239 out.go:177] * Verifying Kubernetes components...
	I0906 12:28:59.826260    6239 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:59.829858    6239 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 12:28:59.829865    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:28:59.833783    6239 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:28:59.837814    6239 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:28:59.841785    6239 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:28:59.841792    6239 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 12:28:59.841800    6239 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/stopped-upgrade-236000/id_rsa Username:docker}
	I0906 12:28:59.922639    6239 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:28:59.927857    6239 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:28:59.927906    6239 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:28:59.932402    6239 api_server.go:72] duration metric: took 110.953ms to wait for apiserver process to appear ...
	I0906 12:28:59.932410    6239 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:28:59.932417    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:28:59.962406    6239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 12:28:59.981356    6239 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 12:29:00.294977    6239 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:29:00.294989    6239 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:29:04.934466    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:04.934565    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:09.935209    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:09.935239    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:14.935651    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:14.935695    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:19.936280    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:19.936313    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:24.937019    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:24.937053    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:29.937987    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:29.938014    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0906 12:29:30.297062    6239 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0906 12:29:30.301231    6239 out.go:177] * Enabled addons: storage-provisioner
	I0906 12:29:30.309185    6239 addons.go:510] duration metric: took 30.48794475s for enable addons: enabled=[storage-provisioner]
	I0906 12:29:34.939207    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:34.939243    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:39.940722    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:39.940763    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:44.941243    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:44.941283    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:49.943393    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:49.943445    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:54.944194    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:54.944237    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:29:59.946460    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:29:59.946586    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:29:59.965682    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:29:59.965761    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:29:59.977302    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:29:59.977371    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:29:59.987624    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:29:59.987693    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:29:59.997921    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:29:59.997990    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:00.008105    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:00.008172    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:00.018409    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:00.018473    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:00.028561    6239 logs.go:276] 0 containers: []
	W0906 12:30:00.028573    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:00.028632    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:00.039661    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:00.039677    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:00.039683    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:00.044573    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:00.044580    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:00.079890    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:00.079901    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:00.094524    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:00.094534    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:00.119355    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:00.119369    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:00.137594    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:00.137609    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:00.149161    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:00.149174    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:00.166800    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:00.166810    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:00.178425    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:00.178435    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:00.211852    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:00.211861    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:00.226113    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:00.226123    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:00.237936    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:00.237952    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:00.249054    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:00.249066    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:02.762473    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:07.764190    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:07.764358    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:07.778801    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:07.778880    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:07.790143    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:07.790210    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:07.800764    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:07.800836    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:07.810926    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:07.810995    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:07.821363    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:07.821429    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:07.838382    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:07.838453    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:07.848422    6239 logs.go:276] 0 containers: []
	W0906 12:30:07.848433    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:07.848489    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:07.863671    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:07.863685    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:07.863690    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:07.875170    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:07.875183    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:07.886628    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:07.886638    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:07.901637    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:07.901648    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:07.925232    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:07.925242    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:07.959271    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:07.959288    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:07.995603    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:07.995617    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:08.009909    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:08.009921    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:08.021923    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:08.021935    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:08.039983    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:08.039996    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:08.051703    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:08.051714    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:08.063210    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:08.063223    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:08.067475    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:08.067485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:10.589311    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:15.591778    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:15.592149    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:15.628294    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:15.628404    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:15.654314    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:15.654400    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:15.667308    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:15.667377    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:15.679164    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:15.679233    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:15.690040    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:15.690109    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:15.700517    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:15.700577    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:15.710686    6239 logs.go:276] 0 containers: []
	W0906 12:30:15.710696    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:15.710749    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:15.726355    6239 logs.go:276] 1 containers: [0a61f6a721aa]
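
Each failed probe is followed by the rediscovery block above: one `docker ps -a --filter=name=k8s_<component> --format {{.ID}}` per control-plane component, with logs.go:276 reporting the ID count and logs.go:278 warning when a component such as kindnet has no containers at all. A hedged local equivalent of that discovery step, using os/exec directly where minikube goes through its SSH runner (containerIDs is a hypothetical helper, not a minikube function):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all container IDs (running or exited) whose name
    // matches the k8s_<component> prefix, mirroring the discovery commands
    // in the trace. Requires a local docker CLI.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
    	}
    }
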
	I0906 12:30:15.726370    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:15.726377    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:15.737866    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:15.737882    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:15.749689    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:15.749701    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:15.783815    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:15.783827    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:15.797737    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:15.797748    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:15.811112    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:15.811122    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:15.822807    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:15.822820    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:15.842020    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:15.842030    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:15.868771    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:15.868784    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:15.904087    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:15.904095    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:15.908452    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:15.908460    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:15.926544    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:15.926555    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:15.938468    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:15.938485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
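
With the IDs in hand, every "Gathering logs for <component> [id]" pair above runs `docker logs --tail 400 <id>` through bash; the kubelet and Docker entries go through journalctl for the systemd units instead, and "describe nodes" calls the version-pinned kubectl under /var/lib/minikube/binaries/v1.24.1. A sketch of the per-container step with the same 400-line cap (assumes a local docker CLI rather than minikube's SSH runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs returns the last n log lines of one container,
    // equivalent to the `docker logs --tail 400 <id>` calls in the trace.
    // CombinedOutput captures both streams, since container logs may go
    // to either stdout or stderr.
    func tailContainerLogs(id string, n int) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// b169e9cd1ce4 is the kube-apiserver container ID from the trace above.
    	logs, err := tailContainerLogs("b169e9cd1ce4", 400)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Print(logs)
    }
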
	I0906 12:30:18.455621    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:23.457945    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:23.458139    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:23.476728    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:23.476813    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:23.495446    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:23.495525    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:23.513505    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:23.513560    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:23.524267    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:23.524336    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:23.534946    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:23.535015    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:23.545195    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:23.545258    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:23.555072    6239 logs.go:276] 0 containers: []
	W0906 12:30:23.555084    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:23.555143    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:23.569997    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:23.570012    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:23.570016    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:23.574508    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:23.574516    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:23.594074    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:23.594085    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:23.609395    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:23.609405    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:23.634982    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:23.634995    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:23.648757    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:23.648769    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:23.666083    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:23.666094    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:23.681722    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:23.681732    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:23.716899    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:23.716912    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:23.752364    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:23.752375    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:23.767243    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:23.767257    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:23.779381    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:23.779395    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:23.799739    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:23.799753    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
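
The "container status" step in the batch above is the one piece with a shell fallback baked in: `which crictl || echo crictl` keeps the bare command name when crictl is not on the caller's PATH (so sudo can still resolve it through its own PATH), and if the crictl listing fails altogether, the trailing `|| sudo docker ps -a` takes over. Reproduced verbatim under bash -c (a sketch; the trace runs the same string over SSH inside the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same fallback chain as the trace: try crictl first, else docker.
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("both crictl and docker listings failed:", err)
    	}
    }
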
	I0906 12:30:26.313491    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:31.315761    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:31.316106    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:31.363801    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:31.363899    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:31.377916    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:31.377986    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:31.389803    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:31.389871    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:31.400553    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:31.400619    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:31.412839    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:31.412909    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:31.423436    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:31.423501    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:31.433992    6239 logs.go:276] 0 containers: []
	W0906 12:30:31.434003    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:31.434057    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:31.447413    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:31.447427    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:31.447432    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:31.459021    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:31.459034    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:31.482654    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:31.482665    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:31.494735    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:31.494745    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:31.529089    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:31.529099    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:31.546817    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:31.546830    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:31.565144    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:31.565157    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:31.577110    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:31.577123    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:31.589102    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:31.589116    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:31.606511    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:31.606521    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:31.610856    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:31.610867    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:31.650872    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:31.650884    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:31.665131    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:31.665142    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:34.178781    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:39.181062    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:39.181463    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:39.211720    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:39.211839    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:39.230143    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:39.230241    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:39.243953    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:39.244024    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:39.256361    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:39.256422    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:39.266803    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:39.266875    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:39.277843    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:39.277912    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:39.287828    6239 logs.go:276] 0 containers: []
	W0906 12:30:39.287843    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:39.287900    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:39.298275    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:39.298292    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:39.298296    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:39.331696    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:39.331708    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:39.343617    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:39.343628    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:39.355647    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:39.355658    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:39.368580    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:39.368590    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:39.393729    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:39.393745    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:39.427394    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:39.427403    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:39.431938    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:39.431945    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:39.446398    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:39.446409    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:39.460463    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:39.460474    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:39.475932    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:39.475941    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:39.493782    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:39.493793    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:39.505479    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:39.505493    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:42.019064    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:47.019367    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:47.019640    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:47.044473    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:47.044591    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:47.063122    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:47.063207    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:47.076095    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:47.076169    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:47.087310    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:47.087371    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:47.097767    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:47.097834    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:47.110496    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:47.110556    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:47.120675    6239 logs.go:276] 0 containers: []
	W0906 12:30:47.120688    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:47.120748    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:47.131165    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:47.131183    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:47.131188    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:47.166103    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:47.166117    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:47.179840    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:47.179852    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:47.192589    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:47.192600    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:47.209406    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:47.209418    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:47.220586    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:47.220600    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:47.255383    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:47.255395    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:47.280341    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:47.280351    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:47.292103    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:47.292113    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:47.308085    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:47.308096    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:47.324967    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:47.324977    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:47.348701    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:47.348710    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:47.359839    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:47.359852    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:49.866314    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:30:54.868530    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:30:54.868696    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:30:54.882823    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:30:54.882929    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:30:54.894165    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:30:54.894229    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:30:54.904812    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:30:54.904881    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:30:54.914784    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:30:54.914850    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:30:54.929262    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:30:54.929334    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:30:54.940272    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:30:54.940344    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:30:54.953085    6239 logs.go:276] 0 containers: []
	W0906 12:30:54.953099    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:30:54.953158    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:30:54.963179    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:30:54.963205    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:30:54.963211    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:30:54.997630    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:30:54.997641    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:30:55.012644    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:30:55.012657    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:30:55.023746    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:30:55.023757    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:30:55.035299    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:30:55.035310    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:30:55.046806    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:30:55.046817    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:30:55.059717    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:30:55.059729    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:30:55.084799    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:30:55.084807    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:30:55.118371    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:30:55.118382    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:30:55.122785    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:30:55.122792    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:30:55.143687    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:30:55.143702    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:30:55.158467    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:30:55.158490    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:30:55.170475    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:30:55.170485    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:30:57.689603    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:02.691888    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:02.692075    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:02.711572    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:02.711664    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:02.725938    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:02.726008    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:02.737759    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:31:02.737831    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:02.748988    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:02.749047    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:02.759957    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:02.760038    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:02.770389    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:02.770450    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:02.781073    6239 logs.go:276] 0 containers: []
	W0906 12:31:02.781085    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:02.781137    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:02.798014    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:02.798028    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:02.798034    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:02.815077    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:02.815088    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:02.826901    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:02.826912    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:02.831160    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:02.831169    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:02.868644    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:02.868657    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:02.880203    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:02.880216    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:02.892225    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:02.892240    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:02.912506    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:02.912517    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:02.924072    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:02.924083    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:02.948346    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:02.948367    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:02.960339    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:02.960349    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:02.993800    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:02.993810    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:03.007813    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:03.007822    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:05.525442    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:10.527701    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:10.527866    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:10.540286    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:10.540365    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:10.551357    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:10.551415    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:10.562134    6239 logs.go:276] 2 containers: [ca272fee1149 18590fe1a116]
	I0906 12:31:10.562211    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:10.573079    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:10.573139    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:10.583571    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:10.583642    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:10.593826    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:10.593893    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:10.603822    6239 logs.go:276] 0 containers: []
	W0906 12:31:10.603832    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:10.603887    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:10.616515    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:10.616531    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:10.616535    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:10.652583    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:10.652603    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:10.680468    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:10.680480    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:10.701724    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:10.701738    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:10.726813    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:10.726824    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:10.748255    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:10.748266    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:10.766735    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:10.766749    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:10.778365    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:10.778379    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:10.783155    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:10.783162    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:10.821707    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:10.821720    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:10.836817    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:10.836827    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:10.851168    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:10.851183    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:10.876893    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:10.876915    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:13.396473    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:18.397596    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:18.397835    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:18.422836    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:18.422951    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:18.438965    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:18.439032    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:18.452291    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:18.452358    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:18.463343    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:18.463410    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:18.473929    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:18.474002    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:18.484771    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:18.484839    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:18.494799    6239 logs.go:276] 0 containers: []
	W0906 12:31:18.494810    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:18.494866    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:18.505472    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:18.505489    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:18.505494    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:18.525221    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:18.525231    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:18.536917    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:18.536931    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:18.548484    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:18.548496    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:18.565732    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:18.565742    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:18.600883    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:18.600891    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:18.617851    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:18.617864    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:18.643777    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:18.643788    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:18.655277    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:18.655290    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:18.669384    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:18.669398    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:18.680910    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:18.680921    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:18.685192    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:18.685199    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:18.699465    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:18.699478    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:18.714406    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:18.714422    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:18.731190    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:18.731201    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:21.268432    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:26.269248    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:26.269493    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:26.287078    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:26.287171    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:26.301534    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:26.301603    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:26.312631    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:26.312702    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:26.323280    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:26.323364    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:26.334064    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:26.334131    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:26.348816    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:26.348886    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:26.358772    6239 logs.go:276] 0 containers: []
	W0906 12:31:26.358784    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:26.358836    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:26.369290    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:26.369307    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:26.369314    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:26.384482    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:26.384493    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:26.388714    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:26.388721    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:26.423521    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:26.423536    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:26.435502    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:26.435517    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:26.465616    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:26.465626    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:26.486056    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:26.486064    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:26.522319    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:26.522333    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:26.536160    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:26.536172    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:26.547767    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:26.547780    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:26.565104    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:26.565117    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:26.589965    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:26.589975    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:26.601914    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:26.601927    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:26.618710    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:26.618722    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:26.630151    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:26.630161    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:29.143973    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:34.146694    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:34.147105    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:34.176338    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:34.176465    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:34.194342    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:34.194433    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:34.207676    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:34.207753    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:34.219104    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:34.219168    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:34.229799    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:34.229865    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:34.241357    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:34.241427    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:34.252095    6239 logs.go:276] 0 containers: []
	W0906 12:31:34.252107    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:34.252169    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:34.262289    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:34.262307    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:34.262312    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:34.274788    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:34.274803    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:34.298832    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:34.298840    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:34.333811    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:34.333823    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:34.368414    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:34.368425    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:34.380424    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:34.380435    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:34.384565    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:34.384572    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:34.396306    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:34.396316    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:34.410711    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:34.410722    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:34.425868    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:34.425879    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:34.444241    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:34.444252    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:34.455759    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:34.455770    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:34.473146    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:34.473156    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:34.485184    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:34.485195    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:34.500690    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:34.500701    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:37.014085    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:42.015317    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:42.015426    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:42.026818    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:42.026895    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:42.048683    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:42.048750    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:42.060020    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:42.060098    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:42.070768    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:42.070834    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:42.081011    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:42.081082    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:42.091089    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:42.091154    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:42.101509    6239 logs.go:276] 0 containers: []
	W0906 12:31:42.101521    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:42.101579    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:42.111593    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:42.111610    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:42.111615    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:42.129091    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:42.129101    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:42.152662    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:42.152669    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:42.188088    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:42.188098    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:42.192267    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:42.192275    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:42.204116    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:42.204127    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:42.225667    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:42.225678    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:42.237824    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:42.237834    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:42.280502    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:42.280518    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:42.295195    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:42.295208    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:42.311241    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:42.311252    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:42.325782    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:42.325793    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:42.337388    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:42.337399    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:42.349969    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:42.349981    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:42.361668    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:42.361679    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:44.875723    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:49.877244    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:49.877410    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:49.889284    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:49.889361    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:49.899553    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:49.899621    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:49.910013    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:49.910089    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:49.920259    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:49.920333    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:49.930595    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:49.930664    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:49.944871    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:49.944940    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:49.958932    6239 logs.go:276] 0 containers: []
	W0906 12:31:49.958944    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:49.958996    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:49.969704    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:49.969723    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:49.969728    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:50.004615    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:50.004626    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:31:50.038398    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:50.038414    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:50.052201    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:50.052211    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:50.065492    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:50.065504    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:50.083511    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:50.083521    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:50.087932    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:50.087944    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:50.105047    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:50.105059    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:50.128897    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:50.128905    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:50.140315    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:50.140328    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:50.155319    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:50.155329    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:50.167716    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:50.167726    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:50.184779    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:50.184789    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:50.196476    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:50.196487    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:50.208427    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:50.208439    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:52.722375    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:31:57.724687    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:31:57.724851    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:31:57.736597    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:31:57.736669    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:31:57.747656    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:31:57.747722    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:31:57.758706    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:31:57.758779    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:31:57.769080    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:31:57.769144    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:31:57.786277    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:31:57.786352    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:31:57.796683    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:31:57.796749    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:31:57.806581    6239 logs.go:276] 0 containers: []
	W0906 12:31:57.806595    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:31:57.806650    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:31:57.817430    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:31:57.817451    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:31:57.817456    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:31:57.832781    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:31:57.832794    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:31:57.857535    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:31:57.857549    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:31:57.869482    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:31:57.869492    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:31:57.880861    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:31:57.880875    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:31:57.895168    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:31:57.895181    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:31:57.911692    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:31:57.911703    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:31:57.926037    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:31:57.926049    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:31:57.943797    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:31:57.943808    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:31:57.948006    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:31:57.948013    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:31:57.964001    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:31:57.964014    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:31:57.975773    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:31:57.975784    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:31:57.988049    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:31:57.988060    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:31:57.999577    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:31:57.999590    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:31:58.035374    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:31:58.035383    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:00.573941    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:05.576212    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:05.576396    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:05.589170    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:05.589250    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:05.599951    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:05.600019    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:05.612477    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:05.612554    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:05.623441    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:05.623513    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:05.634155    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:05.634220    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:05.648400    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:05.648474    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:05.664889    6239 logs.go:276] 0 containers: []
	W0906 12:32:05.664903    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:05.664963    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:05.675877    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:05.675897    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:05.675902    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:05.710782    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:05.710793    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:05.722252    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:05.722265    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:05.737034    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:05.737045    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:05.749334    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:05.749349    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:05.761084    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:05.761095    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:05.778800    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:05.778811    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:05.792880    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:05.792891    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:05.808090    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:05.808102    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:05.825474    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:05.825489    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:05.830240    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:05.830248    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:05.867128    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:05.867141    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:05.879318    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:05.879330    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:05.890790    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:05.890801    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:05.916213    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:05.916227    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:08.430059    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:13.431182    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:13.431321    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:13.442598    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:13.442669    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:13.454579    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:13.454648    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:13.465435    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:13.465507    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:13.476019    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:13.476087    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:13.486633    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:13.486698    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:13.497287    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:13.497353    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:13.507424    6239 logs.go:276] 0 containers: []
	W0906 12:32:13.507436    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:13.507497    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:13.517083    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:13.517099    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:13.517104    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:13.552227    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:13.552238    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:13.570624    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:13.570635    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:13.582507    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:13.582517    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:13.606549    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:13.606561    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:13.623600    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:13.623614    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:13.635122    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:13.635134    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:13.649360    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:13.649370    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:13.661446    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:13.661458    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:13.679235    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:13.679245    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:13.683713    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:13.683719    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:13.708919    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:13.708932    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:13.722124    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:13.722138    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:13.736900    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:13.736912    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:13.778275    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:13.778289    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:16.292199    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:21.293189    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:21.293408    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:21.310334    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:21.310436    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:21.323586    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:21.323656    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:21.340839    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:21.340908    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:21.351515    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:21.351594    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:21.361701    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:21.361787    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:21.372615    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:21.372701    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:21.383522    6239 logs.go:276] 0 containers: []
	W0906 12:32:21.383536    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:21.383603    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:21.394537    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:21.394556    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:21.394561    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:21.399286    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:21.399294    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:21.433005    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:21.433015    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:21.451792    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:21.451802    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:21.463635    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:21.463646    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:21.475392    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:21.475402    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:21.508637    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:21.508647    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:21.531905    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:21.531917    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:21.546887    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:21.546897    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:21.558067    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:21.558078    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:21.570378    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:21.570390    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:21.584601    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:21.584615    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:21.596296    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:21.596310    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:21.610807    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:21.610820    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:21.628817    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:21.628828    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:24.148956    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:29.150881    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:29.151112    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:29.174212    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:29.174310    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:29.190147    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:29.190221    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:29.202662    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:29.202731    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:29.213228    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:29.213293    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:29.224271    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:29.224340    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:29.235002    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:29.235069    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:29.244801    6239 logs.go:276] 0 containers: []
	W0906 12:32:29.244812    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:29.244863    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:29.255564    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:29.255581    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:29.255586    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:29.267591    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:29.267601    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:29.281696    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:29.281713    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:29.294386    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:29.294397    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:29.307449    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:29.307463    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:29.319203    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:29.319212    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:29.354938    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:29.354948    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:29.390601    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:29.390613    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:29.404963    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:29.404974    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:29.418380    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:29.418391    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:29.429877    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:29.429888    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:29.444250    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:29.444263    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:29.467193    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:29.467199    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:29.471607    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:29.471616    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:29.483541    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:29.483556    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:32.001483    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:37.003707    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:37.003846    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:37.016175    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:37.016244    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:37.027129    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:37.027197    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:37.038261    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:37.038327    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:37.048665    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:37.048726    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:37.058847    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:37.058916    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:37.070404    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:37.070469    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:37.080953    6239 logs.go:276] 0 containers: []
	W0906 12:32:37.080963    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:37.081020    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:37.091473    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:37.091488    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:37.091493    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:37.103254    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:37.103264    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:37.107677    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:37.107684    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:37.142646    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:37.142659    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:37.154472    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:37.154482    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:37.167311    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:37.167325    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:37.178963    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:37.178975    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:37.215107    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:37.215119    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:37.229644    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:37.229657    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:37.241069    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:37.241082    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:37.256060    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:37.256072    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:37.270805    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:37.270815    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:37.283048    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:37.283059    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:37.301071    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:37.301080    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:37.312471    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:37.312481    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:39.840221    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:44.842382    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:44.842601    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:44.864330    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:44.864420    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:44.877805    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:44.877883    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:44.889646    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:44.889712    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:44.901475    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:44.901543    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:44.911960    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:44.912032    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:44.921954    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:44.922029    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:44.932373    6239 logs.go:276] 0 containers: []
	W0906 12:32:44.932385    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:44.932443    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:44.943110    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:44.943131    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:44.943136    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:44.954894    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:44.954906    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:44.979691    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:44.979698    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:45.014829    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:45.014843    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:45.030826    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:45.030837    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:45.043014    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:45.043024    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:45.061422    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:45.061435    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:45.096549    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:45.096558    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:45.110201    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:45.110215    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:45.125104    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:45.125114    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:45.129619    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:45.129629    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:45.141033    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:45.141046    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:45.153220    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:45.153234    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:45.165359    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:45.165373    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:45.182382    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:45.182396    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:47.700680    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:32:52.702927    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:32:52.703074    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0906 12:32:52.717442    6239 logs.go:276] 1 containers: [b169e9cd1ce4]
	I0906 12:32:52.717529    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0906 12:32:52.729776    6239 logs.go:276] 1 containers: [e16ccb7f91d1]
	I0906 12:32:52.729847    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0906 12:32:52.744318    6239 logs.go:276] 4 containers: [4b41919850d7 d691dbe9b652 ca272fee1149 18590fe1a116]
	I0906 12:32:52.744390    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0906 12:32:52.754850    6239 logs.go:276] 1 containers: [51ec5568e0c5]
	I0906 12:32:52.754916    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0906 12:32:52.765918    6239 logs.go:276] 1 containers: [0b4d2c3b6dac]
	I0906 12:32:52.765988    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0906 12:32:52.781574    6239 logs.go:276] 1 containers: [25a7bc5bf847]
	I0906 12:32:52.781646    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0906 12:32:52.792148    6239 logs.go:276] 0 containers: []
	W0906 12:32:52.792161    6239 logs.go:278] No container was found matching "kindnet"
	I0906 12:32:52.792218    6239 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0906 12:32:52.802636    6239 logs.go:276] 1 containers: [0a61f6a721aa]
	I0906 12:32:52.802653    6239 logs.go:123] Gathering logs for coredns [d691dbe9b652] ...
	I0906 12:32:52.802658    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d691dbe9b652"
	I0906 12:32:52.814440    6239 logs.go:123] Gathering logs for coredns [ca272fee1149] ...
	I0906 12:32:52.814451    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca272fee1149"
	I0906 12:32:52.825995    6239 logs.go:123] Gathering logs for dmesg ...
	I0906 12:32:52.826005    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 12:32:52.830344    6239 logs.go:123] Gathering logs for describe nodes ...
	I0906 12:32:52.830352    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 12:32:52.871297    6239 logs.go:123] Gathering logs for etcd [e16ccb7f91d1] ...
	I0906 12:32:52.871308    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e16ccb7f91d1"
	I0906 12:32:52.885298    6239 logs.go:123] Gathering logs for coredns [4b41919850d7] ...
	I0906 12:32:52.885311    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b41919850d7"
	I0906 12:32:52.896861    6239 logs.go:123] Gathering logs for container status ...
	I0906 12:32:52.896873    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 12:32:52.914211    6239 logs.go:123] Gathering logs for kube-apiserver [b169e9cd1ce4] ...
	I0906 12:32:52.914223    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b169e9cd1ce4"
	I0906 12:32:52.928653    6239 logs.go:123] Gathering logs for coredns [18590fe1a116] ...
	I0906 12:32:52.928663    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18590fe1a116"
	I0906 12:32:52.940652    6239 logs.go:123] Gathering logs for kube-scheduler [51ec5568e0c5] ...
	I0906 12:32:52.940662    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51ec5568e0c5"
	I0906 12:32:52.955383    6239 logs.go:123] Gathering logs for kube-proxy [0b4d2c3b6dac] ...
	I0906 12:32:52.955394    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4d2c3b6dac"
	I0906 12:32:52.967696    6239 logs.go:123] Gathering logs for kube-controller-manager [25a7bc5bf847] ...
	I0906 12:32:52.967706    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25a7bc5bf847"
	I0906 12:32:52.984925    6239 logs.go:123] Gathering logs for Docker ...
	I0906 12:32:52.984936    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0906 12:32:53.008011    6239 logs.go:123] Gathering logs for kubelet ...
	I0906 12:32:53.008019    6239 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 12:32:53.044178    6239 logs.go:123] Gathering logs for storage-provisioner [0a61f6a721aa] ...
	I0906 12:32:53.044189    6239 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a61f6a721aa"
	I0906 12:32:55.557404    6239 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0906 12:33:00.557732    6239 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 12:33:00.571390    6239 out.go:201] 
	W0906 12:33:00.574726    6239 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0906 12:33:00.574761    6239 out.go:270] * 
	W0906 12:33:00.577329    6239 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:33:00.586495    6239 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-236000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (611.73s)
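For reference, the wait loop above is minikube polling the guest apiserver's healthz endpoint until its 6m0s node-wait budget runs out, re-gathering component logs between attempts. A minimal hand-run equivalent of that probe, with the address taken from the log (the curl flags are our own illustration, not minikube's code):

	# Same endpoint minikube polls; -k skips TLS verification since the
	# cluster CA is not in the host trust store. --max-time mirrors the
	# ~5s client timeout visible in the "stopped:" lines above.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo "exit=$?"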

                                                
                                    
TestPause/serial/Start (9.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-153000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-153000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.857205167s)

                                                
                                                
-- stdout --
	* [pause-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-153000" primary control-plane node in "pause-153000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-153000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-153000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-153000 -n pause-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-153000 -n pause-153000: exit status 7 (65.351875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-153000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)
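Note that this failure (and the remaining qemu2 start failures below) never reaches the apiserver stage: the guest VM is not created at all because the qemu2 driver cannot reach the socket_vmnet daemon's UNIX socket on the host. A quick host-side sanity check, sketched under the assumption that socket_vmnet should be serving the path shown in the error (the lsof invocation is illustrative, not part of the test suite):

	# Is the socket present, and is any process holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet 2>/dev/null || echo "nothing is listening on the socket"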

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2 : exit status 80 (9.898712041s)

                                                
                                                
-- stdout --
	* [NoKubernetes-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-889000" primary control-plane node in "NoKubernetes-889000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-889000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000: exit status 7 (62.254584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-889000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.96s)
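The stderr block above names its own recovery path. A sketch of that delete-and-retry, using the binary and profile name exactly as they appear in the log:

	# Remove the half-created profile, then retry the same start invocation.
	out/minikube-darwin-arm64 delete -p NoKubernetes-889000
	out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2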

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --driver=qemu2 : exit status 80 (5.252193916s)

                                                
                                                
-- stdout --
	* [NoKubernetes-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-889000
	* Restarting existing qemu2 VM for "NoKubernetes-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-889000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000: exit status 7 (61.38025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-889000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)
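
The error text above suggests its own recovery path. A sketch of that sequence, using the exact profile and binary from this run (deleting the profile clears the stale VM state, but the retry will still fail unless the socket_vmnet daemon is back up first):

    out/minikube-darwin-arm64 delete -p NoKubernetes-889000
    out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --driver=qemu2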

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --driver=qemu2 : exit status 80 (5.249839583s)

                                                
                                                
-- stdout --
	* [NoKubernetes-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-889000
	* Restarting existing qemu2 VM for "NoKubernetes-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-889000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000: exit status 7 (58.74775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-889000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)
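
A missing daemon process and a missing socket file are the two usual causes of this refusal, so it is worth confirming whether the daemon is alive at all before retrying. A quick check using standard macOS pgrep (output format may vary by install):

    # List any socket_vmnet processes; a non-zero exit means none are running.
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon not running"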

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.99s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.99s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2 
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19576
- KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3050625537/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2 : exit status 80 (5.30499875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-889000
	* Restarting existing qemu2 VM for "NoKubernetes-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-889000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-889000 -n NoKubernetes-889000: exit status 7 (66.332ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-889000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.37s)
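
The NoKubernetes runs print only the summarized stdout shown above; the network-plugin runs later in this report pass --alsologtostderr, which surfaces the libmachine-level detail (the exact qemu invocation, the retry timing). A sketch of re-running this test's command with that flag for a more diagnosable failure:

    out/minikube-darwin-arm64 start -p NoKubernetes-889000 --driver=qemu2 --alsologtostderr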

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.61s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19576
- KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3313178483/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.61s)
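
Both TestHyperkitDriverSkipUpgrade failures look environmental rather than regressions: the run exits with DRV_UNSUPPORTED_OS because the hyperkit driver only ships for darwin/amd64 and this agent is Apple silicon. A one-line confirmation on the runner:

    uname -m    # prints "arm64" on this agent; hyperkit requires darwin/amd64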

                                                
                                    
TestNetworkPlugins/group/auto/Start (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.400409542s)

                                                
                                                
-- stdout --
	* [auto-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-269000" primary control-plane node in "auto-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:34:23.177928    7207 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:34:23.178048    7207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:23.178053    7207 out.go:358] Setting ErrFile to fd 2...
	I0906 12:34:23.178055    7207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:23.178203    7207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:34:23.179475    7207 out.go:352] Setting JSON to false
	I0906 12:34:23.195713    7207 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5633,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:34:23.195783    7207 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:34:23.203300    7207 out.go:177] * [auto-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:34:23.211501    7207 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:34:23.211553    7207 notify.go:220] Checking for updates...
	I0906 12:34:23.218464    7207 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:34:23.221491    7207 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:34:23.224455    7207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:34:23.227431    7207 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:34:23.230465    7207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:34:23.232352    7207 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:23.232419    7207 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:23.232464    7207 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:34:23.236406    7207 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:34:23.243342    7207 start.go:297] selected driver: qemu2
	I0906 12:34:23.243351    7207 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:34:23.243359    7207 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:34:23.245632    7207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:34:23.248517    7207 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:34:23.251591    7207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:34:23.251638    7207 cni.go:84] Creating CNI manager for ""
	I0906 12:34:23.251651    7207 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:34:23.251661    7207 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:34:23.251698    7207 start.go:340] cluster config:
	{Name:auto-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:34:23.255356    7207 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:34:23.262429    7207 out.go:177] * Starting "auto-269000" primary control-plane node in "auto-269000" cluster
	I0906 12:34:23.266511    7207 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:34:23.266529    7207 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:34:23.266539    7207 cache.go:56] Caching tarball of preloaded images
	I0906 12:34:23.266623    7207 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:34:23.266630    7207 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:34:23.266705    7207 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/auto-269000/config.json ...
	I0906 12:34:23.266721    7207 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/auto-269000/config.json: {Name:mk7cb72e5fddd308630ec8b8ae54a9660980167d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:34:23.267019    7207 start.go:360] acquireMachinesLock for auto-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:23.267055    7207 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "auto-269000"
	I0906 12:34:23.267067    7207 start.go:93] Provisioning new machine with config: &{Name:auto-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:23.267099    7207 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:23.275506    7207 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:34:23.293709    7207 start.go:159] libmachine.API.Create for "auto-269000" (driver="qemu2")
	I0906 12:34:23.293741    7207 client.go:168] LocalClient.Create starting
	I0906 12:34:23.293811    7207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:23.293840    7207 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:23.293850    7207 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:23.293890    7207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:23.293914    7207 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:23.293925    7207 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:23.294278    7207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:23.456009    7207 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:23.613855    7207 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:23.613861    7207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:23.614072    7207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2
	I0906 12:34:23.623654    7207 main.go:141] libmachine: STDOUT: 
	I0906 12:34:23.623670    7207 main.go:141] libmachine: STDERR: 
	I0906 12:34:23.623714    7207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2 +20000M
	I0906 12:34:23.631726    7207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:23.631738    7207 main.go:141] libmachine: STDERR: 
	I0906 12:34:23.631761    7207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2
	I0906 12:34:23.631766    7207 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:23.631778    7207 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:23.631802    7207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:8f:0c:25:bc:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2
	I0906 12:34:23.633460    7207 main.go:141] libmachine: STDOUT: 
	I0906 12:34:23.633475    7207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:23.633491    7207 client.go:171] duration metric: took 339.747541ms to LocalClient.Create
	I0906 12:34:25.635674    7207 start.go:128] duration metric: took 2.368570333s to createHost
	I0906 12:34:25.635728    7207 start.go:83] releasing machines lock for "auto-269000", held for 2.368680958s
	W0906 12:34:25.635824    7207 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:25.646188    7207 out.go:177] * Deleting "auto-269000" in qemu2 ...
	W0906 12:34:25.678632    7207 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:25.678662    7207 start.go:729] Will try again in 5 seconds ...
	I0906 12:34:30.680796    7207 start.go:360] acquireMachinesLock for auto-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:30.681247    7207 start.go:364] duration metric: took 328.791µs to acquireMachinesLock for "auto-269000"
	I0906 12:34:30.681355    7207 start.go:93] Provisioning new machine with config: &{Name:auto-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:30.681694    7207 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:30.692358    7207 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:34:30.741272    7207 start.go:159] libmachine.API.Create for "auto-269000" (driver="qemu2")
	I0906 12:34:30.741327    7207 client.go:168] LocalClient.Create starting
	I0906 12:34:30.741440    7207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:30.741511    7207 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:30.741527    7207 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:30.741589    7207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:30.741632    7207 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:30.741647    7207 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:30.742158    7207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:30.914536    7207 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:31.481709    7207 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:31.481721    7207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:31.481950    7207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2
	I0906 12:34:31.491468    7207 main.go:141] libmachine: STDOUT: 
	I0906 12:34:31.491487    7207 main.go:141] libmachine: STDERR: 
	I0906 12:34:31.491551    7207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2 +20000M
	I0906 12:34:31.499489    7207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:31.499508    7207 main.go:141] libmachine: STDERR: 
	I0906 12:34:31.499525    7207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2
	I0906 12:34:31.499530    7207 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:31.499544    7207 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:31.499573    7207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ff:3d:eb:79:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/auto-269000/disk.qcow2
	I0906 12:34:31.501172    7207 main.go:141] libmachine: STDOUT: 
	I0906 12:34:31.501189    7207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:31.501202    7207 client.go:171] duration metric: took 759.875083ms to LocalClient.Create
	I0906 12:34:33.503377    7207 start.go:128] duration metric: took 2.821663083s to createHost
	I0906 12:34:33.503427    7207 start.go:83] releasing machines lock for "auto-269000", held for 2.822176334s
	W0906 12:34:33.503776    7207 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:33.518292    7207 out.go:201] 
	W0906 12:34:33.522406    7207 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:34:33.522436    7207 out.go:270] * 
	* 
	W0906 12:34:33.524945    7207 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:34:33.534401    7207 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.40s)
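
The stderr above shows the exact invocation that fails: libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the socket before exec'ing the given command. That makes the refusal easy to reproduce outside minikube. A sketch reusing the client and socket path from the log ("echo ok" is a stand-in payload, not something minikube itself runs):

    # Prints "ok" only if the daemon is listening on the socket; otherwise it
    # fails with the same 'Failed to connect to "/var/run/socket_vmnet"' error.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok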

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.828301s)

                                                
                                                
-- stdout --
	* [kindnet-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-269000" primary control-plane node in "kindnet-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:34:35.703895    7318 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:34:35.704038    7318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:35.704042    7318 out.go:358] Setting ErrFile to fd 2...
	I0906 12:34:35.704044    7318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:35.704179    7318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:34:35.705349    7318 out.go:352] Setting JSON to false
	I0906 12:34:35.721862    7318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5645,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:34:35.721936    7318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:34:35.726408    7318 out.go:177] * [kindnet-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:34:35.730355    7318 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:34:35.730436    7318 notify.go:220] Checking for updates...
	I0906 12:34:35.737393    7318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:34:35.740452    7318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:34:35.743446    7318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:34:35.746375    7318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:34:35.749413    7318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:34:35.752821    7318 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:35.752895    7318 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:35.752945    7318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:34:35.757381    7318 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:34:35.764398    7318 start.go:297] selected driver: qemu2
	I0906 12:34:35.764408    7318 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:34:35.764414    7318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:34:35.766685    7318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:34:35.769490    7318 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:34:35.772536    7318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:34:35.772554    7318 cni.go:84] Creating CNI manager for "kindnet"
	I0906 12:34:35.772562    7318 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 12:34:35.772596    7318 start.go:340] cluster config:
	{Name:kindnet-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:34:35.776379    7318 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:34:35.783438    7318 out.go:177] * Starting "kindnet-269000" primary control-plane node in "kindnet-269000" cluster
	I0906 12:34:35.787404    7318 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:34:35.787419    7318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:34:35.787431    7318 cache.go:56] Caching tarball of preloaded images
	I0906 12:34:35.787499    7318 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:34:35.787505    7318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:34:35.787582    7318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/kindnet-269000/config.json ...
	I0906 12:34:35.787598    7318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/kindnet-269000/config.json: {Name:mk1f82b56b4b9db6618fad704d092ab32709043d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:34:35.787943    7318 start.go:360] acquireMachinesLock for kindnet-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:35.787977    7318 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "kindnet-269000"
	I0906 12:34:35.787992    7318 start.go:93] Provisioning new machine with config: &{Name:kindnet-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:35.788024    7318 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:35.796367    7318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:34:35.814120    7318 start.go:159] libmachine.API.Create for "kindnet-269000" (driver="qemu2")
	I0906 12:34:35.814155    7318 client.go:168] LocalClient.Create starting
	I0906 12:34:35.814224    7318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:35.814254    7318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:35.814263    7318 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:35.814301    7318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:35.814328    7318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:35.814339    7318 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:35.814762    7318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:35.976150    7318 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:36.095009    7318 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:36.095015    7318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:36.095218    7318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2
	I0906 12:34:36.104870    7318 main.go:141] libmachine: STDOUT: 
	I0906 12:34:36.104887    7318 main.go:141] libmachine: STDERR: 
	I0906 12:34:36.104930    7318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2 +20000M
	I0906 12:34:36.112824    7318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:36.112840    7318 main.go:141] libmachine: STDERR: 
	I0906 12:34:36.112861    7318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2
	I0906 12:34:36.112869    7318 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:36.112882    7318 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:36.112910    7318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ab:f9:1a:a1:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2
	I0906 12:34:36.114616    7318 main.go:141] libmachine: STDOUT: 
	I0906 12:34:36.114631    7318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:36.114648    7318 client.go:171] duration metric: took 300.490625ms to LocalClient.Create
	I0906 12:34:38.116837    7318 start.go:128] duration metric: took 2.328809125s to createHost
	I0906 12:34:38.116881    7318 start.go:83] releasing machines lock for "kindnet-269000", held for 2.328911917s
	W0906 12:34:38.116938    7318 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:38.127993    7318 out.go:177] * Deleting "kindnet-269000" in qemu2 ...
	W0906 12:34:38.162451    7318 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:38.162484    7318 start.go:729] Will try again in 5 seconds ...
	I0906 12:34:43.164699    7318 start.go:360] acquireMachinesLock for kindnet-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:43.165152    7318 start.go:364] duration metric: took 334.083µs to acquireMachinesLock for "kindnet-269000"
	I0906 12:34:43.165273    7318 start.go:93] Provisioning new machine with config: &{Name:kindnet-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:43.165620    7318 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:43.180331    7318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:34:43.229861    7318 start.go:159] libmachine.API.Create for "kindnet-269000" (driver="qemu2")
	I0906 12:34:43.229912    7318 client.go:168] LocalClient.Create starting
	I0906 12:34:43.230027    7318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:43.230087    7318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:43.230102    7318 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:43.230161    7318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:43.230205    7318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:43.230217    7318 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:43.230894    7318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:43.404620    7318 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:43.433038    7318 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:43.433043    7318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:43.433253    7318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2
	I0906 12:34:43.442398    7318 main.go:141] libmachine: STDOUT: 
	I0906 12:34:43.442417    7318 main.go:141] libmachine: STDERR: 
	I0906 12:34:43.442479    7318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2 +20000M
	I0906 12:34:43.450330    7318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:43.450344    7318 main.go:141] libmachine: STDERR: 
	I0906 12:34:43.450357    7318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2
	I0906 12:34:43.450361    7318 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:43.450373    7318 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:43.450397    7318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:20:88:99:1c:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kindnet-269000/disk.qcow2
	I0906 12:34:43.452071    7318 main.go:141] libmachine: STDOUT: 
	I0906 12:34:43.452087    7318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:43.452099    7318 client.go:171] duration metric: took 222.18175ms to LocalClient.Create
	I0906 12:34:45.454266    7318 start.go:128] duration metric: took 2.288615375s to createHost
	I0906 12:34:45.454329    7318 start.go:83] releasing machines lock for "kindnet-269000", held for 2.2891685s
	W0906 12:34:45.454724    7318 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:45.470321    7318 out.go:201] 
	W0906 12:34:45.474441    7318 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:34:45.474467    7318 out.go:270] * 
	* 
	W0906 12:34:45.477070    7318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:34:45.490274    7318 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
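
All of the network-plugin Start failures in this group exit with status 80 for the same reason: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU ever boots. A minimal Go probe reproduces the failure mode in isolation; the socket path comes from the log above, everything else here is illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that minikube's qemu2 driver hands to
		// socket_vmnet_client. With no daemon listening, this prints the
		// same "connection refused" the tests report.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

When socket_vmnet is installed through Homebrew, the daemon is normally started with `sudo brew services start socket_vmnet`; the identical refusals across every run below suggest the service was simply not running on this agent.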

TestNetworkPlugins/group/calico/Start (9.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.757154917s)

-- stdout --
	* [calico-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-269000" primary control-plane node in "calico-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:34:47.759664    7433 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:34:47.759776    7433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:47.759780    7433 out.go:358] Setting ErrFile to fd 2...
	I0906 12:34:47.759782    7433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:47.759925    7433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:34:47.760950    7433 out.go:352] Setting JSON to false
	I0906 12:34:47.777095    7433 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5657,"bootTime":1725645630,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:34:47.777168    7433 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:34:47.783584    7433 out.go:177] * [calico-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:34:47.791534    7433 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:34:47.791588    7433 notify.go:220] Checking for updates...
	I0906 12:34:47.796775    7433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:34:47.799570    7433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:34:47.802590    7433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:34:47.805547    7433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:34:47.808515    7433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:34:47.811969    7433 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:47.812037    7433 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:47.812082    7433 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:34:47.816589    7433 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:34:47.823512    7433 start.go:297] selected driver: qemu2
	I0906 12:34:47.823519    7433 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:34:47.823525    7433 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:34:47.825710    7433 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:34:47.828595    7433 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:34:47.831621    7433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:34:47.831672    7433 cni.go:84] Creating CNI manager for "calico"
	I0906 12:34:47.831677    7433 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0906 12:34:47.831707    7433 start.go:340] cluster config:
	{Name:calico-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:34:47.835328    7433 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:34:47.842371    7433 out.go:177] * Starting "calico-269000" primary control-plane node in "calico-269000" cluster
	I0906 12:34:47.846589    7433 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:34:47.846606    7433 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:34:47.846618    7433 cache.go:56] Caching tarball of preloaded images
	I0906 12:34:47.846683    7433 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:34:47.846689    7433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:34:47.846768    7433 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/calico-269000/config.json ...
	I0906 12:34:47.846783    7433 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/calico-269000/config.json: {Name:mkf3297f6f9d3e02e6bd83a3dcbffb551e7b765b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:34:47.847019    7433 start.go:360] acquireMachinesLock for calico-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:47.847055    7433 start.go:364] duration metric: took 29.584µs to acquireMachinesLock for "calico-269000"
	I0906 12:34:47.847068    7433 start.go:93] Provisioning new machine with config: &{Name:calico-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:47.847100    7433 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:47.851406    7433 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:34:47.869348    7433 start.go:159] libmachine.API.Create for "calico-269000" (driver="qemu2")
	I0906 12:34:47.869375    7433 client.go:168] LocalClient.Create starting
	I0906 12:34:47.869433    7433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:47.869465    7433 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:47.869475    7433 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:47.869515    7433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:47.869539    7433 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:47.869549    7433 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:47.869939    7433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:48.032098    7433 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:48.073107    7433 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:48.073112    7433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:48.073299    7433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2
	I0906 12:34:48.082312    7433 main.go:141] libmachine: STDOUT: 
	I0906 12:34:48.082329    7433 main.go:141] libmachine: STDERR: 
	I0906 12:34:48.082375    7433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2 +20000M
	I0906 12:34:48.090313    7433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:48.090338    7433 main.go:141] libmachine: STDERR: 
	I0906 12:34:48.090351    7433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2
	I0906 12:34:48.090356    7433 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:48.090367    7433 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:48.090394    7433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:51:ac:92:3e:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2
	I0906 12:34:48.092193    7433 main.go:141] libmachine: STDOUT: 
	I0906 12:34:48.092208    7433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:48.092224    7433 client.go:171] duration metric: took 222.846292ms to LocalClient.Create
	I0906 12:34:50.094374    7433 start.go:128] duration metric: took 2.247273041s to createHost
	I0906 12:34:50.094436    7433 start.go:83] releasing machines lock for "calico-269000", held for 2.247388125s
	W0906 12:34:50.094541    7433 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:50.106762    7433 out.go:177] * Deleting "calico-269000" in qemu2 ...
	W0906 12:34:50.137741    7433 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:50.137763    7433 start.go:729] Will try again in 5 seconds ...
	I0906 12:34:55.139926    7433 start.go:360] acquireMachinesLock for calico-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:34:55.140477    7433 start.go:364] duration metric: took 455.5µs to acquireMachinesLock for "calico-269000"
	I0906 12:34:55.140611    7433 start.go:93] Provisioning new machine with config: &{Name:calico-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:34:55.140845    7433 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:34:55.151502    7433 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:34:55.200430    7433 start.go:159] libmachine.API.Create for "calico-269000" (driver="qemu2")
	I0906 12:34:55.200477    7433 client.go:168] LocalClient.Create starting
	I0906 12:34:55.200589    7433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:34:55.200655    7433 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:55.200683    7433 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:55.200752    7433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:34:55.200796    7433 main.go:141] libmachine: Decoding PEM data...
	I0906 12:34:55.200807    7433 main.go:141] libmachine: Parsing certificate...
	I0906 12:34:55.201315    7433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:34:55.375603    7433 main.go:141] libmachine: Creating SSH key...
	I0906 12:34:55.424261    7433 main.go:141] libmachine: Creating Disk image...
	I0906 12:34:55.424270    7433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:34:55.424471    7433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2
	I0906 12:34:55.433552    7433 main.go:141] libmachine: STDOUT: 
	I0906 12:34:55.433567    7433 main.go:141] libmachine: STDERR: 
	I0906 12:34:55.433614    7433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2 +20000M
	I0906 12:34:55.441469    7433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:34:55.441482    7433 main.go:141] libmachine: STDERR: 
	I0906 12:34:55.441501    7433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2
	I0906 12:34:55.441504    7433 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:34:55.441514    7433 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:34:55.441538    7433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1e:ed:70:59:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/calico-269000/disk.qcow2
	I0906 12:34:55.443127    7433 main.go:141] libmachine: STDOUT: 
	I0906 12:34:55.443140    7433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:34:55.443152    7433 client.go:171] duration metric: took 242.672167ms to LocalClient.Create
	I0906 12:34:57.445307    7433 start.go:128] duration metric: took 2.304442042s to createHost
	I0906 12:34:57.445393    7433 start.go:83] releasing machines lock for "calico-269000", held for 2.30490575s
	W0906 12:34:57.445877    7433 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:34:57.455403    7433 out.go:201] 
	W0906 12:34:57.464448    7433 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:34:57.464475    7433 out.go:270] * 
	* 
	W0906 12:34:57.467119    7433 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:34:57.475381    7433 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.76s)
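
The calico log also shows the driver's recovery path: StartHost fails, the half-created VM is deleted, and one retry follows after five seconds before minikube gives up with GUEST_PROVISION. A simplified sketch of that one-retry flow; createHost and deleteHost are illustrative stand-ins, not minikube's actual functions:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the flow in the log: on failure, delete the
	// half-created VM, wait five seconds, then try exactly once more.
	func startWithRetry(createHost func() error, deleteHost func()) error {
		if err := createHost(); err == nil {
			return nil
		}
		deleteHost()                // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost()
	}

	func main() {
		err := startWithRetry(
			func() error { return errors.New(`connect to "/var/run/socket_vmnet": connection refused`) },
			func() { fmt.Println("deleting half-created VM ...") },
		)
		fmt.Println("final result:", err)
	}

Because the failure is environmental, the retry fails identically, and the total comes out near the ~9.8s seen in each of these tests: two createHost attempts of roughly 2.3s each plus the 5s pause.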

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.876925166s)

-- stdout --
	* [custom-flannel-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-269000" primary control-plane node in "custom-flannel-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:34:59.913840    7553 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:34:59.913996    7553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:59.914000    7553 out.go:358] Setting ErrFile to fd 2...
	I0906 12:34:59.914002    7553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:34:59.914117    7553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:34:59.915221    7553 out.go:352] Setting JSON to false
	I0906 12:34:59.931287    7553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5669,"bootTime":1725645630,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:34:59.931362    7553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:34:59.938028    7553 out.go:177] * [custom-flannel-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:34:59.944871    7553 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:34:59.944917    7553 notify.go:220] Checking for updates...
	I0906 12:34:59.952822    7553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:34:59.955789    7553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:34:59.958882    7553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:34:59.961841    7553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:34:59.964794    7553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:34:59.968188    7553 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:59.968257    7553 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:34:59.968310    7553 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:34:59.972716    7553 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:34:59.979854    7553 start.go:297] selected driver: qemu2
	I0906 12:34:59.979859    7553 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:34:59.979867    7553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:34:59.982163    7553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:34:59.984790    7553 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:34:59.987924    7553 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:34:59.987956    7553 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0906 12:34:59.987965    7553 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0906 12:34:59.987997    7553 start.go:340] cluster config:
	{Name:custom-flannel-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:34:59.991732    7553 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:34:59.998802    7553 out.go:177] * Starting "custom-flannel-269000" primary control-plane node in "custom-flannel-269000" cluster
	I0906 12:35:00.002797    7553 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:35:00.002811    7553 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:35:00.002818    7553 cache.go:56] Caching tarball of preloaded images
	I0906 12:35:00.002891    7553 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:35:00.002897    7553 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:35:00.002962    7553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/custom-flannel-269000/config.json ...
	I0906 12:35:00.002974    7553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/custom-flannel-269000/config.json: {Name:mk7cd6a29c33907958b0cfd59ab1f929a0e17f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:35:00.003227    7553 start.go:360] acquireMachinesLock for custom-flannel-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:00.003267    7553 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "custom-flannel-269000"
	I0906 12:35:00.003280    7553 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:00.003315    7553 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:00.011854    7553 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:00.030007    7553 start.go:159] libmachine.API.Create for "custom-flannel-269000" (driver="qemu2")
	I0906 12:35:00.030038    7553 client.go:168] LocalClient.Create starting
	I0906 12:35:00.030100    7553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:00.030131    7553 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:00.030140    7553 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:00.030178    7553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:00.030201    7553 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:00.030210    7553 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:00.030596    7553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:00.192230    7553 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:00.290412    7553 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:00.290418    7553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:00.290617    7553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2
	I0906 12:35:00.300010    7553 main.go:141] libmachine: STDOUT: 
	I0906 12:35:00.300028    7553 main.go:141] libmachine: STDERR: 
	I0906 12:35:00.300071    7553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2 +20000M
	I0906 12:35:00.307983    7553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:00.308003    7553 main.go:141] libmachine: STDERR: 
	I0906 12:35:00.308021    7553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2
	I0906 12:35:00.308025    7553 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:00.308035    7553 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:00.308073    7553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ca:bc:37:52:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2
	I0906 12:35:00.309676    7553 main.go:141] libmachine: STDOUT: 
	I0906 12:35:00.309694    7553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:00.309711    7553 client.go:171] duration metric: took 279.669958ms to LocalClient.Create
	I0906 12:35:02.311894    7553 start.go:128] duration metric: took 2.308574084s to createHost
	I0906 12:35:02.311955    7553 start.go:83] releasing machines lock for "custom-flannel-269000", held for 2.308693208s
	W0906 12:35:02.312045    7553 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:02.329132    7553 out.go:177] * Deleting "custom-flannel-269000" in qemu2 ...
	W0906 12:35:02.358377    7553 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:02.358406    7553 start.go:729] Will try again in 5 seconds ...
	I0906 12:35:07.360602    7553 start.go:360] acquireMachinesLock for custom-flannel-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:07.361006    7553 start.go:364] duration metric: took 311.625µs to acquireMachinesLock for "custom-flannel-269000"
	I0906 12:35:07.361099    7553 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:07.361352    7553 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:07.368331    7553 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:07.416255    7553 start.go:159] libmachine.API.Create for "custom-flannel-269000" (driver="qemu2")
	I0906 12:35:07.416310    7553 client.go:168] LocalClient.Create starting
	I0906 12:35:07.416422    7553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:07.416491    7553 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:07.416506    7553 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:07.416569    7553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:07.416613    7553 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:07.416628    7553 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:07.417211    7553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:07.593811    7553 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:07.691738    7553 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:07.691743    7553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:07.691905    7553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2
	I0906 12:35:07.701256    7553 main.go:141] libmachine: STDOUT: 
	I0906 12:35:07.701276    7553 main.go:141] libmachine: STDERR: 
	I0906 12:35:07.701319    7553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2 +20000M
	I0906 12:35:07.709307    7553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:07.709324    7553 main.go:141] libmachine: STDERR: 
	I0906 12:35:07.709335    7553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2
	I0906 12:35:07.709340    7553 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:07.709351    7553 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:07.709391    7553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:c2:64:be:2c:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/custom-flannel-269000/disk.qcow2
	I0906 12:35:07.711058    7553 main.go:141] libmachine: STDOUT: 
	I0906 12:35:07.711076    7553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:07.711089    7553 client.go:171] duration metric: took 294.775625ms to LocalClient.Create
	I0906 12:35:09.713251    7553 start.go:128] duration metric: took 2.351882917s to createHost
	I0906 12:35:09.713321    7553 start.go:83] releasing machines lock for "custom-flannel-269000", held for 2.352308708s
	W0906 12:35:09.713703    7553 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:09.729319    7553 out.go:201] 
	W0906 12:35:09.733410    7553 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:35:09.733452    7553 out.go:270] * 
	* 
	W0906 12:35:09.736273    7553 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:35:09.749366    7553 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
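Every failure in this group reduces to the same stderr line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the socket_vmnet daemon was not serving its unix socket when the qemu2 driver tried to attach the VM's netdev via /opt/socket_vmnet/bin/socket_vmnet_client. A minimal triage sketch for the CI host follows; the daemon binary path and gateway address are assumptions inferred from the client path in the logs, not values taken from this report.

	# Verify the unix socket exists and that a daemon process is serving it.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, start the daemon manually (vmnet requires root).
	# Binary path and gateway below are assumed defaults, not from this report.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet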

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.836993542s)

-- stdout --
	* [false-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-269000" primary control-plane node in "false-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:35:12.154371    7672 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:35:12.154496    7672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:12.154500    7672 out.go:358] Setting ErrFile to fd 2...
	I0906 12:35:12.154502    7672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:12.154628    7672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:35:12.155656    7672 out.go:352] Setting JSON to false
	I0906 12:35:12.172071    7672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5682,"bootTime":1725645630,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:35:12.172164    7672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:35:12.178816    7672 out.go:177] * [false-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:35:12.186812    7672 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:35:12.186869    7672 notify.go:220] Checking for updates...
	I0906 12:35:12.194661    7672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:35:12.197832    7672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:35:12.200764    7672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:35:12.208725    7672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:35:12.211755    7672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:35:12.215081    7672 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:12.215164    7672 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:12.215210    7672 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:35:12.219719    7672 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:35:12.226752    7672 start.go:297] selected driver: qemu2
	I0906 12:35:12.226759    7672 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:35:12.226766    7672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:35:12.229312    7672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:35:12.232700    7672 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:35:12.235860    7672 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:35:12.235891    7672 cni.go:84] Creating CNI manager for "false"
	I0906 12:35:12.235920    7672 start.go:340] cluster config:
	{Name:false-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:35:12.239956    7672 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:35:12.245737    7672 out.go:177] * Starting "false-269000" primary control-plane node in "false-269000" cluster
	I0906 12:35:12.249762    7672 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:35:12.249781    7672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:35:12.249799    7672 cache.go:56] Caching tarball of preloaded images
	I0906 12:35:12.249869    7672 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:35:12.249876    7672 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:35:12.249942    7672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/false-269000/config.json ...
	I0906 12:35:12.249954    7672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/false-269000/config.json: {Name:mk60c21bb6a5080a98b173f896560e388ca1113b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:35:12.250315    7672 start.go:360] acquireMachinesLock for false-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:12.250357    7672 start.go:364] duration metric: took 34.084µs to acquireMachinesLock for "false-269000"
	I0906 12:35:12.250371    7672 start.go:93] Provisioning new machine with config: &{Name:false-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:12.250419    7672 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:12.258730    7672 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:12.277714    7672 start.go:159] libmachine.API.Create for "false-269000" (driver="qemu2")
	I0906 12:35:12.277745    7672 client.go:168] LocalClient.Create starting
	I0906 12:35:12.277818    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:12.277850    7672 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:12.277861    7672 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:12.277899    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:12.277925    7672 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:12.277931    7672 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:12.278383    7672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:12.455639    7672 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:12.529061    7672 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:12.529066    7672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:12.529215    7672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2
	I0906 12:35:12.538255    7672 main.go:141] libmachine: STDOUT: 
	I0906 12:35:12.538273    7672 main.go:141] libmachine: STDERR: 
	I0906 12:35:12.538315    7672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2 +20000M
	I0906 12:35:12.546188    7672 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:12.546204    7672 main.go:141] libmachine: STDERR: 
	I0906 12:35:12.546216    7672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2
	I0906 12:35:12.546230    7672 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:12.546243    7672 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:12.546264    7672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:f5:d5:f5:14:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2
	I0906 12:35:12.547892    7672 main.go:141] libmachine: STDOUT: 
	I0906 12:35:12.547908    7672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:12.547925    7672 client.go:171] duration metric: took 270.177084ms to LocalClient.Create
	I0906 12:35:14.550115    7672 start.go:128] duration metric: took 2.299687709s to createHost
	I0906 12:35:14.550223    7672 start.go:83] releasing machines lock for "false-269000", held for 2.299872334s
	W0906 12:35:14.550274    7672 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:14.564508    7672 out.go:177] * Deleting "false-269000" in qemu2 ...
	W0906 12:35:14.594200    7672 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:14.594221    7672 start.go:729] Will try again in 5 seconds ...
	I0906 12:35:19.596426    7672 start.go:360] acquireMachinesLock for false-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:19.596975    7672 start.go:364] duration metric: took 401.583µs to acquireMachinesLock for "false-269000"
	I0906 12:35:19.597119    7672 start.go:93] Provisioning new machine with config: &{Name:false-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:19.597406    7672 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:19.614886    7672 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:19.665139    7672 start.go:159] libmachine.API.Create for "false-269000" (driver="qemu2")
	I0906 12:35:19.665187    7672 client.go:168] LocalClient.Create starting
	I0906 12:35:19.665290    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:19.665346    7672 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:19.665364    7672 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:19.665433    7672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:19.665480    7672 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:19.665496    7672 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:19.666171    7672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:19.840978    7672 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:19.900276    7672 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:19.900281    7672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:19.900435    7672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2
	I0906 12:35:19.909805    7672 main.go:141] libmachine: STDOUT: 
	I0906 12:35:19.909820    7672 main.go:141] libmachine: STDERR: 
	I0906 12:35:19.909876    7672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2 +20000M
	I0906 12:35:19.917702    7672 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:19.917719    7672 main.go:141] libmachine: STDERR: 
	I0906 12:35:19.917730    7672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2
	I0906 12:35:19.917734    7672 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:19.917744    7672 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:19.917774    7672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:af:f1:09:f2:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/false-269000/disk.qcow2
	I0906 12:35:19.919478    7672 main.go:141] libmachine: STDOUT: 
	I0906 12:35:19.919493    7672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:19.919504    7672 client.go:171] duration metric: took 254.312709ms to LocalClient.Create
	I0906 12:35:21.921679    7672 start.go:128] duration metric: took 2.324242417s to createHost
	I0906 12:35:21.921734    7672 start.go:83] releasing machines lock for "false-269000", held for 2.324736208s
	W0906 12:35:21.922036    7672 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:21.935736    7672 out.go:201] 
	W0906 12:35:21.938837    7672 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:35:21.938880    7672 out.go:270] * 
	* 
	W0906 12:35:21.941450    7672 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:35:21.949582    7672 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
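Both the initial create and the automatic retry five seconds later hit the same refused connection, so the `minikube delete -p false-269000` hint printed above is cleanup rather than a fix. With the harness binary used in this run, the equivalent cleanup command would be (a sketch; it only discards the stale profile):

	out/minikube-darwin-arm64 delete -p false-269000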

TestNetworkPlugins/group/enable-default-cni/Start (9.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.750645667s)

-- stdout --
	* [enable-default-cni-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-269000" primary control-plane node in "enable-default-cni-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:35:24.176133    7781 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:35:24.176268    7781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:24.176272    7781 out.go:358] Setting ErrFile to fd 2...
	I0906 12:35:24.176274    7781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:24.176412    7781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:35:24.177518    7781 out.go:352] Setting JSON to false
	I0906 12:35:24.193852    7781 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5694,"bootTime":1725645630,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:35:24.193909    7781 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:35:24.200272    7781 out.go:177] * [enable-default-cni-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:35:24.208205    7781 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:35:24.208254    7781 notify.go:220] Checking for updates...
	I0906 12:35:24.214210    7781 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:35:24.217191    7781 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:35:24.220244    7781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:35:24.223163    7781 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:35:24.226203    7781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:35:24.229610    7781 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:24.229683    7781 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:24.229735    7781 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:35:24.234118    7781 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:35:24.241170    7781 start.go:297] selected driver: qemu2
	I0906 12:35:24.241175    7781 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:35:24.241181    7781 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:35:24.243666    7781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:35:24.246204    7781 out.go:177] * Automatically selected the socket_vmnet network
	E0906 12:35:24.249280    7781 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0906 12:35:24.249294    7781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:35:24.249343    7781 cni.go:84] Creating CNI manager for "bridge"
	I0906 12:35:24.249349    7781 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:35:24.249391    7781 start.go:340] cluster config:
	{Name:enable-default-cni-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:35:24.253241    7781 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:35:24.261145    7781 out.go:177] * Starting "enable-default-cni-269000" primary control-plane node in "enable-default-cni-269000" cluster
	I0906 12:35:24.265199    7781 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:35:24.265215    7781 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:35:24.265228    7781 cache.go:56] Caching tarball of preloaded images
	I0906 12:35:24.265297    7781 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:35:24.265303    7781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:35:24.265388    7781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/enable-default-cni-269000/config.json ...
	I0906 12:35:24.265402    7781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/enable-default-cni-269000/config.json: {Name:mka9e4f206cfbc4ba7bad7a5a1cf975e7e7c52f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:35:24.265625    7781 start.go:360] acquireMachinesLock for enable-default-cni-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:24.265662    7781 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "enable-default-cni-269000"
	I0906 12:35:24.265674    7781 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:24.265703    7781 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:24.274177    7781 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:24.292751    7781 start.go:159] libmachine.API.Create for "enable-default-cni-269000" (driver="qemu2")
	I0906 12:35:24.292773    7781 client.go:168] LocalClient.Create starting
	I0906 12:35:24.292834    7781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:24.292863    7781 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:24.292880    7781 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:24.292912    7781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:24.292936    7781 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:24.292942    7781 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:24.293298    7781 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:24.457016    7781 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:24.487665    7781 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:24.487670    7781 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:24.487827    7781 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2
	I0906 12:35:24.496980    7781 main.go:141] libmachine: STDOUT: 
	I0906 12:35:24.496997    7781 main.go:141] libmachine: STDERR: 
	I0906 12:35:24.497039    7781 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2 +20000M
	I0906 12:35:24.504889    7781 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:24.504903    7781 main.go:141] libmachine: STDERR: 
	I0906 12:35:24.504923    7781 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2
	I0906 12:35:24.504932    7781 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:24.504946    7781 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:24.504974    7781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:db:4f:a9:d0:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2
	I0906 12:35:24.506554    7781 main.go:141] libmachine: STDOUT: 
	I0906 12:35:24.506568    7781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:24.506586    7781 client.go:171] duration metric: took 213.808167ms to LocalClient.Create
	I0906 12:35:26.508764    7781 start.go:128] duration metric: took 2.243053583s to createHost
	I0906 12:35:26.508827    7781 start.go:83] releasing machines lock for "enable-default-cni-269000", held for 2.243172s
	W0906 12:35:26.508923    7781 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:26.520384    7781 out.go:177] * Deleting "enable-default-cni-269000" in qemu2 ...
	W0906 12:35:26.551495    7781 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:26.551544    7781 start.go:729] Will try again in 5 seconds ...
	I0906 12:35:31.553718    7781 start.go:360] acquireMachinesLock for enable-default-cni-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:31.554109    7781 start.go:364] duration metric: took 314.5µs to acquireMachinesLock for "enable-default-cni-269000"
	I0906 12:35:31.554238    7781 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:31.554531    7781 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:31.572144    7781 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:31.620898    7781 start.go:159] libmachine.API.Create for "enable-default-cni-269000" (driver="qemu2")
	I0906 12:35:31.620950    7781 client.go:168] LocalClient.Create starting
	I0906 12:35:31.621055    7781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:31.621110    7781 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:31.621124    7781 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:31.621188    7781 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:31.621231    7781 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:31.621243    7781 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:31.621756    7781 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:31.794510    7781 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:31.832568    7781 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:31.832573    7781 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:31.832728    7781 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2
	I0906 12:35:31.841720    7781 main.go:141] libmachine: STDOUT: 
	I0906 12:35:31.841743    7781 main.go:141] libmachine: STDERR: 
	I0906 12:35:31.841796    7781 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2 +20000M
	I0906 12:35:31.849651    7781 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:31.849665    7781 main.go:141] libmachine: STDERR: 
	I0906 12:35:31.849677    7781 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2
	I0906 12:35:31.849680    7781 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:31.849690    7781 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:31.849714    7781 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:11:bd:66:96:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/enable-default-cni-269000/disk.qcow2
	I0906 12:35:31.851319    7781 main.go:141] libmachine: STDOUT: 
	I0906 12:35:31.851334    7781 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:31.851346    7781 client.go:171] duration metric: took 230.393583ms to LocalClient.Create
	I0906 12:35:33.853586    7781 start.go:128] duration metric: took 2.29899725s to createHost
	I0906 12:35:33.853656    7781 start.go:83] releasing machines lock for "enable-default-cni-269000", held for 2.299540875s
	W0906 12:35:33.853917    7781 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:33.868292    7781 out.go:201] 
	W0906 12:35:33.873360    7781 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:35:33.873402    7781 out.go:270] * 
	* 
	W0906 12:35:33.875905    7781 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:35:33.884305    7781 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.75s)
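Note the stderr line `Found deprecated --enable-default-cni flag, setting --cni=bridge`: this test effectively exercises the bridge CNI. Once the socket_vmnet daemon is reachable, an equivalent start without the deprecated flag would be (a sketch reusing the flags the harness passes above):

	out/minikube-darwin-arm64 start -p enable-default-cni-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2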

TestNetworkPlugins/group/flannel/Start (10.12s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.121660708s)

-- stdout --
	* [flannel-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-269000" primary control-plane node in "flannel-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:35:36.092553    7893 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:35:36.092702    7893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:36.092705    7893 out.go:358] Setting ErrFile to fd 2...
	I0906 12:35:36.092707    7893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:36.092838    7893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:35:36.093872    7893 out.go:352] Setting JSON to false
	I0906 12:35:36.110000    7893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5706,"bootTime":1725645630,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:35:36.110065    7893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:35:36.116639    7893 out.go:177] * [flannel-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:35:36.124854    7893 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:35:36.124900    7893 notify.go:220] Checking for updates...
	I0906 12:35:36.131802    7893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:35:36.134819    7893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:35:36.137699    7893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:35:36.140765    7893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:35:36.143816    7893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:35:36.145443    7893 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:36.145511    7893 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:36.145569    7893 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:35:36.149803    7893 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:35:36.156634    7893 start.go:297] selected driver: qemu2
	I0906 12:35:36.156641    7893 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:35:36.156650    7893 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:35:36.158967    7893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:35:36.161808    7893 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:35:36.164923    7893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:35:36.164947    7893 cni.go:84] Creating CNI manager for "flannel"
	I0906 12:35:36.164952    7893 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0906 12:35:36.164995    7893 start.go:340] cluster config:
	{Name:flannel-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:35:36.168590    7893 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:35:36.175746    7893 out.go:177] * Starting "flannel-269000" primary control-plane node in "flannel-269000" cluster
	I0906 12:35:36.179798    7893 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:35:36.179814    7893 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:35:36.179823    7893 cache.go:56] Caching tarball of preloaded images
	I0906 12:35:36.179876    7893 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:35:36.179882    7893 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:35:36.179950    7893 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/flannel-269000/config.json ...
	I0906 12:35:36.179962    7893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/flannel-269000/config.json: {Name:mk621451484e4738cc5eedfcaefd3b9e729c5fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:35:36.180173    7893 start.go:360] acquireMachinesLock for flannel-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:36.180206    7893 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "flannel-269000"
	I0906 12:35:36.180219    7893 start.go:93] Provisioning new machine with config: &{Name:flannel-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:36.180245    7893 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:36.188802    7893 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:36.205444    7893 start.go:159] libmachine.API.Create for "flannel-269000" (driver="qemu2")
	I0906 12:35:36.205467    7893 client.go:168] LocalClient.Create starting
	I0906 12:35:36.205528    7893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:36.205559    7893 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:36.205568    7893 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:36.205603    7893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:36.205625    7893 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:36.205635    7893 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:36.205979    7893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:36.367835    7893 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:36.661577    7893 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:36.661589    7893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:36.661835    7893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2
	I0906 12:35:36.671764    7893 main.go:141] libmachine: STDOUT: 
	I0906 12:35:36.671784    7893 main.go:141] libmachine: STDERR: 
	I0906 12:35:36.671836    7893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2 +20000M
	I0906 12:35:36.679788    7893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:36.679802    7893 main.go:141] libmachine: STDERR: 
	I0906 12:35:36.679817    7893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2
	I0906 12:35:36.679820    7893 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:36.679835    7893 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:36.679872    7893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:1c:e9:77:ec:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2
	I0906 12:35:36.681527    7893 main.go:141] libmachine: STDOUT: 
	I0906 12:35:36.681547    7893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:36.681566    7893 client.go:171] duration metric: took 476.097125ms to LocalClient.Create
	I0906 12:35:38.683770    7893 start.go:128] duration metric: took 2.503508042s to createHost
	I0906 12:35:38.683842    7893 start.go:83] releasing machines lock for "flannel-269000", held for 2.503644958s
	W0906 12:35:38.683915    7893 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:38.691175    7893 out.go:177] * Deleting "flannel-269000" in qemu2 ...
	W0906 12:35:38.720891    7893 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:38.720920    7893 start.go:729] Will try again in 5 seconds ...
	I0906 12:35:43.723110    7893 start.go:360] acquireMachinesLock for flannel-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:43.723530    7893 start.go:364] duration metric: took 349.708µs to acquireMachinesLock for "flannel-269000"
	I0906 12:35:43.723654    7893 start.go:93] Provisioning new machine with config: &{Name:flannel-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:43.723909    7893 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:43.735930    7893 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:43.789908    7893 start.go:159] libmachine.API.Create for "flannel-269000" (driver="qemu2")
	I0906 12:35:43.789965    7893 client.go:168] LocalClient.Create starting
	I0906 12:35:43.790076    7893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:43.790148    7893 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:43.790164    7893 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:43.790235    7893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:43.790282    7893 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:43.790297    7893 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:43.790901    7893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:43.963858    7893 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:44.120848    7893 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:44.120856    7893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:44.121021    7893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2
	I0906 12:35:44.130379    7893 main.go:141] libmachine: STDOUT: 
	I0906 12:35:44.130400    7893 main.go:141] libmachine: STDERR: 
	I0906 12:35:44.130475    7893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2 +20000M
	I0906 12:35:44.138474    7893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:44.138488    7893 main.go:141] libmachine: STDERR: 
	I0906 12:35:44.138503    7893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2
	I0906 12:35:44.138511    7893 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:44.138524    7893 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:44.138562    7893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:d9:3e:c9:85:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/flannel-269000/disk.qcow2
	I0906 12:35:44.140234    7893 main.go:141] libmachine: STDOUT: 
	I0906 12:35:44.140249    7893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:44.140263    7893 client.go:171] duration metric: took 350.294667ms to LocalClient.Create
	I0906 12:35:46.142525    7893 start.go:128] duration metric: took 2.418521083s to createHost
	I0906 12:35:46.142617    7893 start.go:83] releasing machines lock for "flannel-269000", held for 2.419075833s
	W0906 12:35:46.142997    7893 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:46.151963    7893 out.go:201] 
	W0906 12:35:46.160122    7893 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:35:46.160147    7893 out.go:270] * 
	W0906 12:35:46.162725    7893 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:35:46.172022    7893 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.12s)
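
Every start in this group dies the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet before QEMU is even launched, so both VM-create attempts fail and the test exits with status 80. The failing step can be reproduced outside minikube with a short Go probe; this is a minimal sketch, not minikube code — only the socket path is taken from the log above, everything else is illustrative:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path copied from the failing qemu command line in the log above.
		const socketPath = "/var/run/socket_vmnet"

		// socket_vmnet_client connects to this unix socket before launching
		// qemu-system-aarch64; if nothing is listening, it fails exactly as above.
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A failing probe points at the daemon side (socket_vmnet not running, or its socket file absent or stale on the Jenkins host) rather than at any per-test qemu invocation, which is consistent with every network-plugin test in this group failing identically.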

TestNetworkPlugins/group/bridge/Start (9.79s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.783428834s)

-- stdout --
	* [bridge-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-269000" primary control-plane node in "bridge-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:35:48.588103    8012 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:35:48.588246    8012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:48.588250    8012 out.go:358] Setting ErrFile to fd 2...
	I0906 12:35:48.588252    8012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:35:48.588423    8012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:35:48.589526    8012 out.go:352] Setting JSON to false
	I0906 12:35:48.606013    8012 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5718,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:35:48.606095    8012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:35:48.612930    8012 out.go:177] * [bridge-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:35:48.619819    8012 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:35:48.619860    8012 notify.go:220] Checking for updates...
	I0906 12:35:48.626824    8012 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:35:48.629857    8012 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:35:48.632846    8012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:35:48.635843    8012 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:35:48.638875    8012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:35:48.642269    8012 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:48.642338    8012 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:35:48.642385    8012 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:35:48.646819    8012 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:35:48.653870    8012 start.go:297] selected driver: qemu2
	I0906 12:35:48.653879    8012 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:35:48.653887    8012 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:35:48.656386    8012 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:35:48.659764    8012 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:35:48.662912    8012 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:35:48.662965    8012 cni.go:84] Creating CNI manager for "bridge"
	I0906 12:35:48.662969    8012 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:35:48.662999    8012 start.go:340] cluster config:
	{Name:bridge-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:35:48.666720    8012 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:35:48.672802    8012 out.go:177] * Starting "bridge-269000" primary control-plane node in "bridge-269000" cluster
	I0906 12:35:48.676836    8012 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:35:48.676852    8012 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:35:48.676864    8012 cache.go:56] Caching tarball of preloaded images
	I0906 12:35:48.676929    8012 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:35:48.676949    8012 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:35:48.677020    8012 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/bridge-269000/config.json ...
	I0906 12:35:48.677033    8012 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/bridge-269000/config.json: {Name:mkbc10cdf805aa7a1203526414b1e4ed9db16e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:35:48.677402    8012 start.go:360] acquireMachinesLock for bridge-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:48.677440    8012 start.go:364] duration metric: took 30.708µs to acquireMachinesLock for "bridge-269000"
	I0906 12:35:48.677452    8012 start.go:93] Provisioning new machine with config: &{Name:bridge-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:48.677485    8012 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:48.685852    8012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:48.703657    8012 start.go:159] libmachine.API.Create for "bridge-269000" (driver="qemu2")
	I0906 12:35:48.703684    8012 client.go:168] LocalClient.Create starting
	I0906 12:35:48.703741    8012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:48.703774    8012 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:48.703784    8012 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:48.703819    8012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:48.703843    8012 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:48.703852    8012 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:48.704359    8012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:48.868374    8012 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:48.909855    8012 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:48.909860    8012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:48.910040    8012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2
	I0906 12:35:48.919298    8012 main.go:141] libmachine: STDOUT: 
	I0906 12:35:48.919317    8012 main.go:141] libmachine: STDERR: 
	I0906 12:35:48.919370    8012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2 +20000M
	I0906 12:35:48.927194    8012 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:48.927215    8012 main.go:141] libmachine: STDERR: 
	I0906 12:35:48.927224    8012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2
	I0906 12:35:48.927230    8012 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:48.927242    8012 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:48.927272    8012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:47:0c:03:cd:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2
	I0906 12:35:48.928892    8012 main.go:141] libmachine: STDOUT: 
	I0906 12:35:48.928908    8012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:48.928926    8012 client.go:171] duration metric: took 225.24025ms to LocalClient.Create
	I0906 12:35:50.931109    8012 start.go:128] duration metric: took 2.253614375s to createHost
	I0906 12:35:50.931186    8012 start.go:83] releasing machines lock for "bridge-269000", held for 2.253752959s
	W0906 12:35:50.931265    8012 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:50.949410    8012 out.go:177] * Deleting "bridge-269000" in qemu2 ...
	W0906 12:35:50.981106    8012 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:50.981128    8012 start.go:729] Will try again in 5 seconds ...
	I0906 12:35:55.983330    8012 start.go:360] acquireMachinesLock for bridge-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:35:55.983728    8012 start.go:364] duration metric: took 290.584µs to acquireMachinesLock for "bridge-269000"
	I0906 12:35:55.983865    8012 start.go:93] Provisioning new machine with config: &{Name:bridge-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:35:55.984181    8012 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:35:56.001891    8012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:35:56.051816    8012 start.go:159] libmachine.API.Create for "bridge-269000" (driver="qemu2")
	I0906 12:35:56.051880    8012 client.go:168] LocalClient.Create starting
	I0906 12:35:56.051981    8012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:35:56.052063    8012 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:56.052099    8012 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:56.052164    8012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:35:56.052209    8012 main.go:141] libmachine: Decoding PEM data...
	I0906 12:35:56.052223    8012 main.go:141] libmachine: Parsing certificate...
	I0906 12:35:56.052879    8012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:35:56.224868    8012 main.go:141] libmachine: Creating SSH key...
	I0906 12:35:56.278547    8012 main.go:141] libmachine: Creating Disk image...
	I0906 12:35:56.278553    8012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:35:56.278725    8012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2
	I0906 12:35:56.288022    8012 main.go:141] libmachine: STDOUT: 
	I0906 12:35:56.288038    8012 main.go:141] libmachine: STDERR: 
	I0906 12:35:56.288110    8012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2 +20000M
	I0906 12:35:56.295903    8012 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:35:56.295929    8012 main.go:141] libmachine: STDERR: 
	I0906 12:35:56.295941    8012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2
	I0906 12:35:56.295950    8012 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:35:56.295959    8012 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:35:56.295983    8012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:38:98:aa:12:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/bridge-269000/disk.qcow2
	I0906 12:35:56.297598    8012 main.go:141] libmachine: STDOUT: 
	I0906 12:35:56.297614    8012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:35:56.297627    8012 client.go:171] duration metric: took 245.742708ms to LocalClient.Create
	I0906 12:35:58.299816    8012 start.go:128] duration metric: took 2.315597125s to createHost
	I0906 12:35:58.299873    8012 start.go:83] releasing machines lock for "bridge-269000", held for 2.316135958s
	W0906 12:35:58.300194    8012 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:35:58.309932    8012 out.go:201] 
	W0906 12:35:58.317026    8012 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:35:58.317052    8012 out.go:270] * 
	W0906 12:35:58.319557    8012 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:35:58.327821    8012 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.79s)
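
The recovery path visible in each of these logs is a single fixed-delay retry: the first create fails, the driver logs "Will try again in 5 seconds ...", one more create fails, and the run exits with status 80. A hedged sketch of that pattern, as a standalone Go program (dialWithRetry is an invented name, not minikube's API; the socket path and the 5-second wait are taken from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry mirrors the retry shape in the log: try, wait a fixed
	// interval, try once more, then give up. Invented helper, not minikube code.
	func dialWithRetry(path string, attempts int, wait time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("unix", path, 2*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			if i < attempts-1 {
				fmt.Printf("attempt %d failed (%v); retrying in %s\n", i+1, err, wait)
				time.Sleep(wait)
			}
		}
		return nil, lastErr
	}

	func main() {
		conn, err := dialWithRetry("/var/run/socket_vmnet", 2, 5*time.Second)
		if err != nil {
			// Both attempts refused: the condition the tests report as exit status 80.
			fmt.Println("giving up:", err)
			return
		}
		conn.Close()
		fmt.Println("connected")
	}

Because the daemon never comes back between attempts, the retry cannot succeed here; the roughly 10-second test durations are simply two ~2.5s create attempts plus the 5-second wait.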

TestNetworkPlugins/group/kubenet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-269000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.884378417s)

-- stdout --
	* [kubenet-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-269000" primary control-plane node in "kubenet-269000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-269000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:36:00.519841    8123 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:00.519972    8123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:00.519975    8123 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:00.519977    8123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:00.520113    8123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:00.521190    8123 out.go:352] Setting JSON to false
	I0906 12:36:00.537472    8123 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5730,"bootTime":1725645630,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:36:00.537553    8123 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:36:00.544123    8123 out.go:177] * [kubenet-269000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:36:00.552041    8123 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:36:00.552102    8123 notify.go:220] Checking for updates...
	I0906 12:36:00.559014    8123 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:36:00.562013    8123 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:36:00.565039    8123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:36:00.567977    8123 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:36:00.570959    8123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:36:00.574321    8123 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:00.574392    8123 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:00.574438    8123 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:36:00.578884    8123 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:36:00.585995    8123 start.go:297] selected driver: qemu2
	I0906 12:36:00.586004    8123 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:36:00.586012    8123 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:36:00.588447    8123 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:36:00.591957    8123 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:36:00.595112    8123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:36:00.595141    8123 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0906 12:36:00.595165    8123 start.go:340] cluster config:
	{Name:kubenet-269000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:00.599028    8123 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:00.605952    8123 out.go:177] * Starting "kubenet-269000" primary control-plane node in "kubenet-269000" cluster
	I0906 12:36:00.609965    8123 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:36:00.609979    8123 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:36:00.609989    8123 cache.go:56] Caching tarball of preloaded images
	I0906 12:36:00.610060    8123 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:36:00.610066    8123 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:36:00.610134    8123 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/kubenet-269000/config.json ...
	I0906 12:36:00.610147    8123 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/kubenet-269000/config.json: {Name:mk916567d1666ef1ba53da4236b7790fe1cbaf97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:36:00.610506    8123 start.go:360] acquireMachinesLock for kubenet-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:00.610544    8123 start.go:364] duration metric: took 30.541µs to acquireMachinesLock for "kubenet-269000"
	I0906 12:36:00.610557    8123 start.go:93] Provisioning new machine with config: &{Name:kubenet-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:00.610595    8123 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:00.618977    8123 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:36:00.637133    8123 start.go:159] libmachine.API.Create for "kubenet-269000" (driver="qemu2")
	I0906 12:36:00.637162    8123 client.go:168] LocalClient.Create starting
	I0906 12:36:00.637230    8123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:00.637259    8123 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:00.637269    8123 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:00.637308    8123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:00.637332    8123 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:00.637341    8123 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:00.637729    8123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:00.802838    8123 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:00.927126    8123 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:00.927132    8123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:00.927297    8123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2
	I0906 12:36:00.936388    8123 main.go:141] libmachine: STDOUT: 
	I0906 12:36:00.936405    8123 main.go:141] libmachine: STDERR: 
	I0906 12:36:00.936457    8123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2 +20000M
	I0906 12:36:00.944240    8123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:00.944255    8123 main.go:141] libmachine: STDERR: 
	I0906 12:36:00.944276    8123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2
	I0906 12:36:00.944282    8123 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:00.944295    8123 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:00.944324    8123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9e:43:61:b6:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2
	I0906 12:36:00.945921    8123 main.go:141] libmachine: STDOUT: 
	I0906 12:36:00.945937    8123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:00.945955    8123 client.go:171] duration metric: took 308.7895ms to LocalClient.Create
	I0906 12:36:02.948061    8123 start.go:128] duration metric: took 2.337467625s to createHost
	I0906 12:36:02.948092    8123 start.go:83] releasing machines lock for "kubenet-269000", held for 2.337558209s
	W0906 12:36:02.948148    8123 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:02.954169    8123 out.go:177] * Deleting "kubenet-269000" in qemu2 ...
	W0906 12:36:02.977330    8123 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:02.977351    8123 start.go:729] Will try again in 5 seconds ...
	I0906 12:36:07.979564    8123 start.go:360] acquireMachinesLock for kubenet-269000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:07.980057    8123 start.go:364] duration metric: took 380.083µs to acquireMachinesLock for "kubenet-269000"
	I0906 12:36:07.980207    8123 start.go:93] Provisioning new machine with config: &{Name:kubenet-269000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-269000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:07.980559    8123 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:07.992136    8123 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 12:36:08.042702    8123 start.go:159] libmachine.API.Create for "kubenet-269000" (driver="qemu2")
	I0906 12:36:08.042753    8123 client.go:168] LocalClient.Create starting
	I0906 12:36:08.042868    8123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:08.042940    8123 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:08.042957    8123 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:08.043028    8123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:08.043079    8123 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:08.043094    8123 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:08.043616    8123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:08.218128    8123 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:08.309766    8123 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:08.309771    8123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:08.309939    8123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2
	I0906 12:36:08.319081    8123 main.go:141] libmachine: STDOUT: 
	I0906 12:36:08.319101    8123 main.go:141] libmachine: STDERR: 
	I0906 12:36:08.319157    8123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2 +20000M
	I0906 12:36:08.327095    8123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:08.327114    8123 main.go:141] libmachine: STDERR: 
	I0906 12:36:08.327126    8123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2
	I0906 12:36:08.327129    8123 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:08.327140    8123 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:08.327177    8123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:10:ad:72:d0:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/kubenet-269000/disk.qcow2
	I0906 12:36:08.328823    8123 main.go:141] libmachine: STDOUT: 
	I0906 12:36:08.328839    8123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:08.328852    8123 client.go:171] duration metric: took 286.096292ms to LocalClient.Create
	I0906 12:36:10.331015    8123 start.go:128] duration metric: took 2.350418292s to createHost
	I0906 12:36:10.331078    8123 start.go:83] releasing machines lock for "kubenet-269000", held for 2.350999125s
	W0906 12:36:10.331462    8123 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-269000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:10.347044    8123 out.go:201] 
	W0906 12:36:10.351222    8123 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:36:10.351249    8123 out.go:270] * 
	* 
	W0906 12:36:10.353730    8123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:36:10.362059    8123 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
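Every start failure in this report shares one root symptom: qemu is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so host creation aborts with "Connection refused" before the VM ever boots. A minimal diagnostic sketch for the test host, assuming the Homebrew-managed socket_vmnet service and the binary/socket paths that appear in the log above:

	# Is anything serving the unix socket the qemu2 driver expects?
	ls -l /var/run/socket_vmnet
	# Probe it the way libmachine does: socket_vmnet_client connects to the
	# socket, then execs the given command with the connection on fd 3; with
	# no daemon listening it fails immediately with the same "Connection
	# refused" seen throughout this report ("echo ok" is a stand-in for the
	# real qemu-system-aarch64 command line).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# If the probe fails, restart the daemon, e.g. via Homebrew services
	# (assuming that is how socket_vmnet was installed on this agent):
	sudo brew services restart socket_vmnet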

TestStartStop/group/old-k8s-version/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-504000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-504000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.003362292s)

-- stdout --
	* [old-k8s-version-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-504000" primary control-plane node in "old-k8s-version-504000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-504000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:36:12.555661    8232 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:12.555819    8232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:12.555823    8232 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:12.555825    8232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:12.555951    8232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:12.556985    8232 out.go:352] Setting JSON to false
	I0906 12:36:12.573032    8232 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5742,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:36:12.573108    8232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:36:12.578931    8232 out.go:177] * [old-k8s-version-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:36:12.586702    8232 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:36:12.586779    8232 notify.go:220] Checking for updates...
	I0906 12:36:12.593777    8232 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:36:12.596679    8232 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:36:12.599720    8232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:36:12.602804    8232 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:36:12.605760    8232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:36:12.609095    8232 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:12.609159    8232 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:12.609205    8232 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:36:12.613758    8232 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:36:12.620722    8232 start.go:297] selected driver: qemu2
	I0906 12:36:12.620729    8232 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:36:12.620745    8232 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:36:12.622976    8232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:36:12.625769    8232 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:36:12.628794    8232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:36:12.628817    8232 cni.go:84] Creating CNI manager for ""
	I0906 12:36:12.628825    8232 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:36:12.628896    8232 start.go:340] cluster config:
	{Name:old-k8s-version-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:12.632362    8232 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:12.639757    8232 out.go:177] * Starting "old-k8s-version-504000" primary control-plane node in "old-k8s-version-504000" cluster
	I0906 12:36:12.643716    8232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 12:36:12.643733    8232 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 12:36:12.643748    8232 cache.go:56] Caching tarball of preloaded images
	I0906 12:36:12.643808    8232 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:36:12.643814    8232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0906 12:36:12.643885    8232 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/old-k8s-version-504000/config.json ...
	I0906 12:36:12.643902    8232 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/old-k8s-version-504000/config.json: {Name:mk07ece5742601aa107271bc56260514e641295b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:36:12.644110    8232 start.go:360] acquireMachinesLock for old-k8s-version-504000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:12.644149    8232 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "old-k8s-version-504000"
	I0906 12:36:12.644161    8232 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:12.644194    8232 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:12.652733    8232 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:36:12.670179    8232 start.go:159] libmachine.API.Create for "old-k8s-version-504000" (driver="qemu2")
	I0906 12:36:12.670202    8232 client.go:168] LocalClient.Create starting
	I0906 12:36:12.670261    8232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:12.670289    8232 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:12.670300    8232 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:12.670337    8232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:12.670359    8232 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:12.670364    8232 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:12.670708    8232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:12.833601    8232 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:13.008919    8232 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:13.008925    8232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:13.009118    8232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:13.019019    8232 main.go:141] libmachine: STDOUT: 
	I0906 12:36:13.019042    8232 main.go:141] libmachine: STDERR: 
	I0906 12:36:13.019088    8232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2 +20000M
	I0906 12:36:13.027013    8232 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:13.027029    8232 main.go:141] libmachine: STDERR: 
	I0906 12:36:13.027051    8232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:13.027056    8232 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:13.027071    8232 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:13.027094    8232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:42:ce:0b:75:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:13.028770    8232 main.go:141] libmachine: STDOUT: 
	I0906 12:36:13.028786    8232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:13.028806    8232 client.go:171] duration metric: took 358.597708ms to LocalClient.Create
	I0906 12:36:15.030994    8232 start.go:128] duration metric: took 2.386795458s to createHost
	I0906 12:36:15.031049    8232 start.go:83] releasing machines lock for "old-k8s-version-504000", held for 2.38690825s
	W0906 12:36:15.031099    8232 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:15.043340    8232 out.go:177] * Deleting "old-k8s-version-504000" in qemu2 ...
	W0906 12:36:15.076463    8232 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:15.076492    8232 start.go:729] Will try again in 5 seconds ...
	I0906 12:36:20.078715    8232 start.go:360] acquireMachinesLock for old-k8s-version-504000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:20.079121    8232 start.go:364] duration metric: took 328µs to acquireMachinesLock for "old-k8s-version-504000"
	I0906 12:36:20.079250    8232 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:20.079547    8232 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:20.090129    8232 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:36:20.140276    8232 start.go:159] libmachine.API.Create for "old-k8s-version-504000" (driver="qemu2")
	I0906 12:36:20.140321    8232 client.go:168] LocalClient.Create starting
	I0906 12:36:20.140443    8232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:20.140502    8232 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:20.140516    8232 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:20.140580    8232 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:20.140624    8232 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:20.140638    8232 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:20.141148    8232 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:20.313937    8232 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:20.462722    8232 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:20.462728    8232 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:20.462903    8232 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:20.472584    8232 main.go:141] libmachine: STDOUT: 
	I0906 12:36:20.472604    8232 main.go:141] libmachine: STDERR: 
	I0906 12:36:20.472651    8232 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2 +20000M
	I0906 12:36:20.480563    8232 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:20.480577    8232 main.go:141] libmachine: STDERR: 
	I0906 12:36:20.480586    8232 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:20.480597    8232 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:20.480607    8232 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:20.480633    8232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e2:e9:57:b6:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:20.482248    8232 main.go:141] libmachine: STDOUT: 
	I0906 12:36:20.482263    8232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:20.482277    8232 client.go:171] duration metric: took 341.952209ms to LocalClient.Create
	I0906 12:36:22.484499    8232 start.go:128] duration metric: took 2.404944959s to createHost
	I0906 12:36:22.484566    8232 start.go:83] releasing machines lock for "old-k8s-version-504000", held for 2.405436125s
	W0906 12:36:22.484876    8232 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:22.498817    8232 out.go:201] 
	W0906 12:36:22.503976    8232 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:36:22.504012    8232 out.go:270] * 
	* 
	W0906 12:36:22.506763    8232 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:36:22.517747    8232 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-504000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (66.929542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.07s)
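Because the daemon is unreachable, the create, "Connection refused", delete, retry-in-5-seconds loop can never converge, which is why each FirstStart fails deterministically in roughly 10 seconds. The failure should reproduce outside the test harness with the same binary; a sketch using the command and profile from this test:

	# Re-run the failing start by hand to confirm the problem lives in the
	# host environment rather than the test framework:
	out/minikube-darwin-arm64 start -p old-k8s-version-504000 --memory=2200 \
	  --driver=qemu2 --kubernetes-version=v1.20.0
	# The resulting state matches the post-mortem above: "Stopped", exit 7.
	out/minikube-darwin-arm64 status -p old-k8s-version-504000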

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-504000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-504000 create -f testdata/busybox.yaml: exit status 1 (28.789292ms)

** stderr ** 
	error: context "old-k8s-version-504000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-504000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (30.37675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (29.441208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
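The remaining serial tests in this group fail for a derived reason: FirstStart exited before minikube could write the "old-k8s-version-504000" context, so every kubectl --context invocation can only report that the context does not exist. A one-line check, assuming the KUBECONFIG path printed at the top of the start output:

	# The context was never created, so it will be absent from this list:
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19576-2143/kubeconfig config get-contexts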

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-504000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-504000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-504000 describe deploy/metrics-server -n kube-system: exit status 1 (26.720334ms)

** stderr ** 
	error: context "old-k8s-version-504000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-504000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (29.7325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-504000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-504000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.184618333s)

-- stdout --
	* [old-k8s-version-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-504000" primary control-plane node in "old-k8s-version-504000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:36:26.573892    8282 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:26.573994    8282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:26.573997    8282 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:26.573999    8282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:26.574155    8282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:26.575191    8282 out.go:352] Setting JSON to false
	I0906 12:36:26.591495    8282 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5756,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:36:26.591571    8282 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:36:26.595870    8282 out.go:177] * [old-k8s-version-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:36:26.602732    8282 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:36:26.602784    8282 notify.go:220] Checking for updates...
	I0906 12:36:26.610699    8282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:36:26.613724    8282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:36:26.615031    8282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:36:26.617666    8282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:36:26.620716    8282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:36:26.624089    8282 config.go:182] Loaded profile config "old-k8s-version-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0906 12:36:26.627659    8282 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 12:36:26.630666    8282 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:36:26.634748    8282 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:36:26.641743    8282 start.go:297] selected driver: qemu2
	I0906 12:36:26.641754    8282 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:26.641827    8282 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:36:26.644268    8282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:36:26.644312    8282 cni.go:84] Creating CNI manager for ""
	I0906 12:36:26.644319    8282 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 12:36:26.644354    8282 start.go:340] cluster config:
	{Name:old-k8s-version-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:26.647962    8282 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:26.655764    8282 out.go:177] * Starting "old-k8s-version-504000" primary control-plane node in "old-k8s-version-504000" cluster
	I0906 12:36:26.659747    8282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 12:36:26.659768    8282 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 12:36:26.659781    8282 cache.go:56] Caching tarball of preloaded images
	I0906 12:36:26.659853    8282 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:36:26.659859    8282 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0906 12:36:26.659924    8282 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/old-k8s-version-504000/config.json ...
	I0906 12:36:26.660420    8282 start.go:360] acquireMachinesLock for old-k8s-version-504000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:26.660459    8282 start.go:364] duration metric: took 31.708µs to acquireMachinesLock for "old-k8s-version-504000"
	I0906 12:36:26.660470    8282 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:36:26.660474    8282 fix.go:54] fixHost starting: 
	I0906 12:36:26.660603    8282 fix.go:112] recreateIfNeeded on old-k8s-version-504000: state=Stopped err=<nil>
	W0906 12:36:26.660611    8282 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:36:26.664719    8282 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-504000" ...
	I0906 12:36:26.671693    8282 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:26.671739    8282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e2:e9:57:b6:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:26.673957    8282 main.go:141] libmachine: STDOUT: 
	I0906 12:36:26.673977    8282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:26.674009    8282 fix.go:56] duration metric: took 13.535209ms for fixHost
	I0906 12:36:26.674013    8282 start.go:83] releasing machines lock for "old-k8s-version-504000", held for 13.549667ms
	W0906 12:36:26.674022    8282 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:36:26.674066    8282 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:26.674071    8282 start.go:729] Will try again in 5 seconds ...
	I0906 12:36:31.676221    8282 start.go:360] acquireMachinesLock for old-k8s-version-504000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:31.676689    8282 start.go:364] duration metric: took 341.458µs to acquireMachinesLock for "old-k8s-version-504000"
	I0906 12:36:31.676819    8282 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:36:31.676839    8282 fix.go:54] fixHost starting: 
	I0906 12:36:31.677520    8282 fix.go:112] recreateIfNeeded on old-k8s-version-504000: state=Stopped err=<nil>
	W0906 12:36:31.677548    8282 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:36:31.681930    8282 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-504000" ...
	I0906 12:36:31.686017    8282 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:31.686278    8282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e2:e9:57:b6:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/old-k8s-version-504000/disk.qcow2
	I0906 12:36:31.695405    8282 main.go:141] libmachine: STDOUT: 
	I0906 12:36:31.695485    8282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:31.695559    8282 fix.go:56] duration metric: took 18.72075ms for fixHost
	I0906 12:36:31.695577    8282 start.go:83] releasing machines lock for "old-k8s-version-504000", held for 18.865417ms
	W0906 12:36:31.695809    8282 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:31.702901    8282 out.go:201] 
	W0906 12:36:31.707020    8282 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:36:31.707046    8282 out.go:270] * 
	W0906 12:36:31.709593    8282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:36:31.716833    8282 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-504000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (70.560458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
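Note: every failure in this serial group traces to one root cause visible in the stderr above: the qemu2 driver shells out to socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A quick triage on the CI host is to check whether the daemon is running and owns that socket. The commands below are a minimal sketch; the launchd service label is an assumption based on a default socket_vmnet install and may differ on this machine:

	# Does the socket exist?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded under launchd? (label assumed: io.github.lima-vm.socket_vmnet)
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon if the socket is missing or stale (assumed label)
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet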

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-504000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (32.043709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
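Note: this failure and the addon/image checks below are a cascade from the failed SecondStart above. Because the VM never booted, minikube never wrote an "old-k8s-version-504000" context into the kubeconfig, so every kubectl invocation fails immediately with "context does not exist". A hedged one-liner to confirm the missing context by hand (kubeconfig path copied from the run environment):

	KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig kubectl config get-contexts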

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-504000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-504000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-504000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.119625ms)

** stderr **
	error: context "old-k8s-version-504000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-504000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (29.635708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-504000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (30.853083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
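Note: the want/got diff above lists all eight default v1.20.0 images as missing only because the host is stopped; "image list" had no VM to query, so the "got" side is empty. Once the socket_vmnet issue is resolved, the check can be reproduced by hand with the same command the test runs:

	out/minikube-darwin-arm64 -p old-k8s-version-504000 image list --format=json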

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-504000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-504000 --alsologtostderr -v=1: exit status 83 (41.762792ms)

-- stdout --
	* The control-plane node old-k8s-version-504000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-504000"

-- /stdout --
** stderr ** 
	I0906 12:36:31.991589    8301 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:31.991992    8301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:31.991996    8301 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:31.991998    8301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:31.992177    8301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:31.992379    8301 out.go:352] Setting JSON to false
	I0906 12:36:31.992386    8301 mustload.go:65] Loading cluster: old-k8s-version-504000
	I0906 12:36:31.992582    8301 config.go:182] Loaded profile config "old-k8s-version-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0906 12:36:31.997069    8301 out.go:177] * The control-plane node old-k8s-version-504000 host is not running: state=Stopped
	I0906 12:36:32.000075    8301 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-504000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-504000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (29.472083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (29.781875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.859910125s)

-- stdout --
	* [no-preload-052000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-052000" primary control-plane node in "no-preload-052000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-052000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0906 12:36:32.313265    8318 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:32.313399    8318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:32.313402    8318 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:32.313404    8318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:32.313540    8318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:32.314593    8318 out.go:352] Setting JSON to false
	I0906 12:36:32.330875    8318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5762,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:36:32.330949    8318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:36:32.336085    8318 out.go:177] * [no-preload-052000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:36:32.343112    8318 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:36:32.343170    8318 notify.go:220] Checking for updates...
	I0906 12:36:32.349052    8318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:36:32.352034    8318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:36:32.355066    8318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:36:32.358047    8318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:36:32.361026    8318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:36:32.364387    8318 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:32.364456    8318 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:32.364516    8318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:36:32.369024    8318 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:36:32.376064    8318 start.go:297] selected driver: qemu2
	I0906 12:36:32.376073    8318 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:36:32.376081    8318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:36:32.378378    8318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:36:32.382006    8318 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:36:32.383252    8318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:36:32.383314    8318 cni.go:84] Creating CNI manager for ""
	I0906 12:36:32.383323    8318 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:36:32.383333    8318 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:36:32.383360    8318 start.go:340] cluster config:
	{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-052000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:32.387062    8318 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.394054    8318 out.go:177] * Starting "no-preload-052000" primary control-plane node in "no-preload-052000" cluster
	I0906 12:36:32.397977    8318 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:36:32.398049    8318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/no-preload-052000/config.json ...
	I0906 12:36:32.398067    8318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/no-preload-052000/config.json: {Name:mk7be805f548d85b51e918ac04c79f281aa15adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:36:32.398058    8318 cache.go:107] acquiring lock: {Name:mkab7a7d4abedf3c4819d7aa829fcdb26da0e508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398077    8318 cache.go:107] acquiring lock: {Name:mk76c033ec6d2d818e6e2bca8d18bd154a86e539 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398089    8318 cache.go:107] acquiring lock: {Name:mk246fb06349ed49f91a202a9a119d50d36da8d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398119    8318 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 12:36:32.398128    8318 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 71.208µs
	I0906 12:36:32.398133    8318 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 12:36:32.398147    8318 cache.go:107] acquiring lock: {Name:mka70f64740da40ae49323c80f77dcdada267893 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398225    8318 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0906 12:36:32.398057    8318 cache.go:107] acquiring lock: {Name:mk639270f1193cff7a30ed2fe3dcd65e696ca93c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398247    8318 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:36:32.398264    8318 cache.go:107] acquiring lock: {Name:mkbe278f444f97437b24a7e46c9381c5dfe49810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398303    8318 cache.go:107] acquiring lock: {Name:mkb46b4bb1822bf822b24ceae979dc93da60fe13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398319    8318 cache.go:107] acquiring lock: {Name:mkc69174ae4790d7a20bd0ef227627f1992a059d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:32.398335    8318 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0906 12:36:32.398404    8318 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:36:32.398449    8318 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:36:32.398459    8318 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:36:32.398481    8318 start.go:360] acquireMachinesLock for no-preload-052000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:32.398516    8318 start.go:364] duration metric: took 30.041µs to acquireMachinesLock for "no-preload-052000"
	I0906 12:36:32.398535    8318 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:36:32.398527    8318 start.go:93] Provisioning new machine with config: &{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-052000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:32.398557    8318 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:32.406038    8318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:36:32.410706    8318 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:36:32.410772    8318 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0906 12:36:32.410821    8318 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:36:32.411185    8318 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0906 12:36:32.412109    8318 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:36:32.412267    8318 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:36:32.412647    8318 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:36:32.423702    8318 start.go:159] libmachine.API.Create for "no-preload-052000" (driver="qemu2")
	I0906 12:36:32.423737    8318 client.go:168] LocalClient.Create starting
	I0906 12:36:32.423848    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:32.423888    8318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:32.423902    8318 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:32.423960    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:32.423990    8318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:32.423998    8318 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:32.424404    8318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:32.587486    8318 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:32.655489    8318 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:32.655510    8318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:32.655694    8318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:32.665202    8318 main.go:141] libmachine: STDOUT: 
	I0906 12:36:32.665226    8318 main.go:141] libmachine: STDERR: 
	I0906 12:36:32.665280    8318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2 +20000M
	I0906 12:36:32.674003    8318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:32.674022    8318 main.go:141] libmachine: STDERR: 
	I0906 12:36:32.674041    8318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:32.674044    8318 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:32.674054    8318 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:32.674081    8318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:f1:41:bd:d3:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:32.675954    8318 main.go:141] libmachine: STDOUT: 
	I0906 12:36:32.675975    8318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:32.675995    8318 client.go:171] duration metric: took 252.255416ms to LocalClient.Create
	I0906 12:36:32.820887    8318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0906 12:36:32.842119    8318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0906 12:36:32.843586    8318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0906 12:36:32.860730    8318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0906 12:36:32.892056    8318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0906 12:36:32.910699    8318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0906 12:36:32.910723    8318 cache.go:162] opening:  /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0906 12:36:33.009131    8318 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0906 12:36:33.009178    8318 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 611.093167ms
	I0906 12:36:33.009229    8318 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0906 12:36:34.676152    8318 start.go:128] duration metric: took 2.277579625s to createHost
	I0906 12:36:34.676250    8318 start.go:83] releasing machines lock for "no-preload-052000", held for 2.277741125s
	W0906 12:36:34.676288    8318 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:34.688407    8318 out.go:177] * Deleting "no-preload-052000" in qemu2 ...
	W0906 12:36:34.719047    8318 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:34.719078    8318 start.go:729] Will try again in 5 seconds ...
	I0906 12:36:35.892969    8318 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0906 12:36:35.893020    8318 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.494794916s
	I0906 12:36:35.893046    8318 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0906 12:36:36.189792    8318 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0906 12:36:36.189842    8318 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.791615541s
	I0906 12:36:36.189886    8318 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0906 12:36:36.544926    8318 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0906 12:36:36.544950    8318 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.1467145s
	I0906 12:36:36.544963    8318 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0906 12:36:36.971597    8318 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0906 12:36:36.971649    8318 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.573620167s
	I0906 12:36:36.971671    8318 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0906 12:36:37.612242    8318 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0906 12:36:37.612318    8318 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 5.214280209s
	I0906 12:36:37.612349    8318 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0906 12:36:39.719287    8318 start.go:360] acquireMachinesLock for no-preload-052000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:39.719666    8318 start.go:364] duration metric: took 313.708µs to acquireMachinesLock for "no-preload-052000"
	I0906 12:36:39.719768    8318 start.go:93] Provisioning new machine with config: &{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-052000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:39.720002    8318 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:39.731582    8318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:36:39.780863    8318 start.go:159] libmachine.API.Create for "no-preload-052000" (driver="qemu2")
	I0906 12:36:39.780905    8318 client.go:168] LocalClient.Create starting
	I0906 12:36:39.781021    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:39.781084    8318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:39.781104    8318 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:39.781174    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:39.781219    8318 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:39.781241    8318 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:39.781760    8318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:39.954967    8318 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:40.071819    8318 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:40.071830    8318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:40.072010    8318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:40.081581    8318 main.go:141] libmachine: STDOUT: 
	I0906 12:36:40.081601    8318 main.go:141] libmachine: STDERR: 
	I0906 12:36:40.081663    8318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2 +20000M
	I0906 12:36:40.089768    8318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:40.089783    8318 main.go:141] libmachine: STDERR: 
	I0906 12:36:40.089795    8318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:40.089801    8318 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:40.089811    8318 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:40.089855    8318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:96:5d:d2:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:40.091609    8318 main.go:141] libmachine: STDOUT: 
	I0906 12:36:40.091624    8318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:40.091637    8318 client.go:171] duration metric: took 310.729042ms to LocalClient.Create
	I0906 12:36:40.271453    8318 cache.go:157] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0906 12:36:40.271474    8318 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.873383917s
	I0906 12:36:40.271487    8318 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0906 12:36:40.271519    8318 cache.go:87] Successfully saved all images to host disk.
	I0906 12:36:42.093876    8318 start.go:128] duration metric: took 2.373829708s to createHost
	I0906 12:36:42.093976    8318 start.go:83] releasing machines lock for "no-preload-052000", held for 2.374304834s
	W0906 12:36:42.094336    8318 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-052000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:42.109968    8318 out.go:201] 
	W0906 12:36:42.114022    8318 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:36:42.114067    8318 out.go:270] * 
	W0906 12:36:42.116569    8318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:36:42.129887    8318 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (66.807834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.93s)
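Note: with --preload=false this test exercises the per-image cache path, and the log shows that part working despite the VM failure: all seven registry.k8s.io images (pause, coredns, kube-scheduler, kube-proxy, kube-apiserver, kube-controller-manager, etcd) were saved to the host cache, ending with "Successfully saved all images to host disk." The cached tarballs can be inspected directly (path copied from the log above):

	ls /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/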

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-052000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-052000 create -f testdata/busybox.yaml: exit status 1 (29.636625ms)

** stderr **
	error: context "no-preload-052000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-052000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (30.151417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (29.731875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-052000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-052000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-052000 describe deploy/metrics-server -n kube-system: exit status 1 (27.592375ms)

** stderr **
	error: context "no-preload-052000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-052000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (29.956375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.184853666s)

-- stdout --
	* [no-preload-052000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-052000" primary control-plane node in "no-preload-052000" cluster
	* Restarting existing qemu2 VM for "no-preload-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:36:46.041118    8396 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:46.041262    8396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:46.041265    8396 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:46.041268    8396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:46.041393    8396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:46.042456    8396 out.go:352] Setting JSON to false
	I0906 12:36:46.058615    8396 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5776,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:36:46.058687    8396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:36:46.064197    8396 out.go:177] * [no-preload-052000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:36:46.071204    8396 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:36:46.071262    8396 notify.go:220] Checking for updates...
	I0906 12:36:46.078216    8396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:36:46.081098    8396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:36:46.084188    8396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:36:46.087196    8396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:36:46.090202    8396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:36:46.093447    8396 config.go:182] Loaded profile config "no-preload-052000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:46.093705    8396 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:36:46.098257    8396 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:36:46.105143    8396 start.go:297] selected driver: qemu2
	I0906 12:36:46.105153    8396 start.go:901] validating driver "qemu2" against &{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-052000 N
amespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:46.105203    8396 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:36:46.107537    8396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:36:46.107564    8396 cni.go:84] Creating CNI manager for ""
	I0906 12:36:46.107571    8396 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:36:46.107602    8396 start.go:340] cluster config:
	{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-052000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:46.111089    8396 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.119147    8396 out.go:177] * Starting "no-preload-052000" primary control-plane node in "no-preload-052000" cluster
	I0906 12:36:46.123125    8396 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:36:46.123211    8396 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/no-preload-052000/config.json ...
	I0906 12:36:46.123265    8396 cache.go:107] acquiring lock: {Name:mkbe278f444f97437b24a7e46c9381c5dfe49810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123292    8396 cache.go:107] acquiring lock: {Name:mk246fb06349ed49f91a202a9a119d50d36da8d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123294    8396 cache.go:107] acquiring lock: {Name:mk639270f1193cff7a30ed2fe3dcd65e696ca93c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123343    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0906 12:36:46.123350    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0906 12:36:46.123348    8396 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 56.375µs
	I0906 12:36:46.123356    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0906 12:36:46.123361    8396 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 72µs
	I0906 12:36:46.123362    8396 cache.go:107] acquiring lock: {Name:mka70f64740da40ae49323c80f77dcdada267893 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123362    8396 cache.go:107] acquiring lock: {Name:mkc69174ae4790d7a20bd0ef227627f1992a059d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123355    8396 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 92.625µs
	I0906 12:36:46.123391    8396 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0906 12:36:46.123396    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0906 12:36:46.123401    8396 cache.go:107] acquiring lock: {Name:mkb46b4bb1822bf822b24ceae979dc93da60fe13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123417    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0906 12:36:46.123366    8396 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0906 12:36:46.123424    8396 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 62.042µs
	I0906 12:36:46.123427    8396 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0906 12:36:46.123434    8396 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 73.167µs
	I0906 12:36:46.123273    8396 cache.go:107] acquiring lock: {Name:mk76c033ec6d2d818e6e2bca8d18bd154a86e539 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123436    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0906 12:36:46.123462    8396 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0906 12:36:46.123357    8396 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0906 12:36:46.123465    8396 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 64µs
	I0906 12:36:46.123469    8396 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0906 12:36:46.123265    8396 cache.go:107] acquiring lock: {Name:mkab7a7d4abedf3c4819d7aa829fcdb26da0e508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:46.123507    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0906 12:36:46.123511    8396 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 249.875µs
	I0906 12:36:46.123518    8396 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0906 12:36:46.123518    8396 cache.go:115] /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 12:36:46.123523    8396 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 260.542µs
	I0906 12:36:46.123527    8396 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 12:36:46.123532    8396 cache.go:87] Successfully saved all images to host disk.
	I0906 12:36:46.123647    8396 start.go:360] acquireMachinesLock for no-preload-052000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:46.123676    8396 start.go:364] duration metric: took 23.292µs to acquireMachinesLock for "no-preload-052000"
	I0906 12:36:46.123686    8396 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:36:46.123690    8396 fix.go:54] fixHost starting: 
	I0906 12:36:46.123810    8396 fix.go:112] recreateIfNeeded on no-preload-052000: state=Stopped err=<nil>
	W0906 12:36:46.123818    8396 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:36:46.132187    8396 out.go:177] * Restarting existing qemu2 VM for "no-preload-052000" ...
	I0906 12:36:46.136012    8396 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:46.136061    8396 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:96:5d:d2:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:46.137980    8396 main.go:141] libmachine: STDOUT: 
	I0906 12:36:46.137996    8396 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:46.138019    8396 fix.go:56] duration metric: took 14.328167ms for fixHost
	I0906 12:36:46.138022    8396 start.go:83] releasing machines lock for "no-preload-052000", held for 14.342583ms
	W0906 12:36:46.138030    8396 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:36:46.138057    8396 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:46.138061    8396 start.go:729] Will try again in 5 seconds ...
	I0906 12:36:51.140203    8396 start.go:360] acquireMachinesLock for no-preload-052000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:51.140628    8396 start.go:364] duration metric: took 332.667µs to acquireMachinesLock for "no-preload-052000"
	I0906 12:36:51.140758    8396 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:36:51.140786    8396 fix.go:54] fixHost starting: 
	I0906 12:36:51.141520    8396 fix.go:112] recreateIfNeeded on no-preload-052000: state=Stopped err=<nil>
	W0906 12:36:51.141560    8396 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:36:51.149709    8396 out.go:177] * Restarting existing qemu2 VM for "no-preload-052000" ...
	I0906 12:36:51.152864    8396 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:51.153055    8396 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:96:5d:d2:39:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/no-preload-052000/disk.qcow2
	I0906 12:36:51.161998    8396 main.go:141] libmachine: STDOUT: 
	I0906 12:36:51.162061    8396 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:51.162149    8396 fix.go:56] duration metric: took 21.365083ms for fixHost
	I0906 12:36:51.162165    8396 start.go:83] releasing machines lock for "no-preload-052000", held for 21.507291ms
	W0906 12:36:51.162390    8396 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-052000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-052000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:51.170774    8396 out.go:201] 
	W0906 12:36:51.173989    8396 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:36:51.174018    8396 out.go:270] * 
	* 
	W0906 12:36:51.176531    8396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:36:51.184939    8396 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (70.541542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
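[Editor's note] The restart fails before Kubernetes is ever involved: both boot attempts die when the qemu2 driver's socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), which suggests the socket_vmnet daemon is not listening on this CI host. A minimal sketch of probing that socket directly (a hypothetical diagnostic, not part of the suite):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Probe the control socket the qemu2 driver hands to socket_vmnet_client.
    	const sock = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// "connection refused" here reproduces the STDERR logged above.
    		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println(sock, "is accepting connections")
    }

On a healthy host the dial succeeds immediately; a refused connection points at the daemon, not at minikube or QEMU.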

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-052000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (32.879833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-052000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-052000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-052000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.611ms)

** stderr **
	error: context "no-preload-052000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-052000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (30.633791ms)

-- stdout --
	Stopped


-- /stdout --
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-052000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (29.812708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
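[Editor's note] The (-want +got) diff above has the shape produced by github.com/google/go-cmp: every expected v1.31.0 image is "missing" because `image list` has no VM to query. A sketch that reproduces a diff of that shape (go-cmp is an assumed dependency here, not necessarily what the harness calls):

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	want := []string{
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    		"registry.k8s.io/coredns/coredns:v1.11.1",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/kube-apiserver:v1.31.0",
    		"registry.k8s.io/kube-controller-manager:v1.31.0",
    		"registry.k8s.io/kube-proxy:v1.31.0",
    		"registry.k8s.io/kube-scheduler:v1.31.0",
    		"registry.k8s.io/pause:3.10",
    	}
    	got := []string{} // `image list` returns nothing: the VM never started
    	if diff := cmp.Diff(want, got); diff != "" {
    		fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
    	}
    }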

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-052000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-052000 --alsologtostderr -v=1: exit status 83 (41.670167ms)

-- stdout --
	* The control-plane node no-preload-052000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-052000"

-- /stdout --
** stderr ** 
	I0906 12:36:51.461652    8417 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:51.461801    8417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:51.461804    8417 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:51.461806    8417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:51.461936    8417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:51.462144    8417 out.go:352] Setting JSON to false
	I0906 12:36:51.462152    8417 mustload.go:65] Loading cluster: no-preload-052000
	I0906 12:36:51.462344    8417 config.go:182] Loaded profile config "no-preload-052000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:51.466746    8417 out.go:177] * The control-plane node no-preload-052000 host is not running: state=Stopped
	I0906 12:36:51.469786    8417 out.go:177]   To start a cluster, run: "minikube start -p no-preload-052000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-052000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (30.578625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (30.500458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
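[Editor's note] pause exits with status 83 because the profile's host is stopped; the post-mortem helper then runs the status probe shown twice above. A sketch of that probe (binary path and profile name taken from this report; illustrative only), showing why the helper treats exit status 7 as "may be ok":

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The same probe the post-mortem helper runs: report only the host state.
    	cmd := exec.Command("out/minikube-darwin-arm64", "status",
    		"--format={{.Host}}", "-p", "no-preload-052000", "-n", "no-preload-052000")
    	out, err := cmd.Output()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		// In this report a non-zero status (7) accompanies state=Stopped:
    		// the profile exists but the host is down, so the helper logs
    		// "may be ok" and skips log retrieval instead of failing hard.
    		fmt.Printf("state=%s exit=%d\n", out, ee.ExitCode())
    		return
    	}
    	if err != nil {
    		fmt.Println("run:", err)
    		return
    	}
    	fmt.Printf("state=%s (running)\n", out)
    }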

TestStartStop/group/embed-certs/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-760000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-760000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.941715417s)

-- stdout --
	* [embed-certs-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-760000" primary control-plane node in "embed-certs-760000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-760000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:36:51.787287    8434 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:36:51.787414    8434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:51.787417    8434 out.go:358] Setting ErrFile to fd 2...
	I0906 12:36:51.787419    8434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:36:51.787554    8434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:36:51.788636    8434 out.go:352] Setting JSON to false
	I0906 12:36:51.804864    8434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5781,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:36:51.804950    8434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:36:51.808797    8434 out.go:177] * [embed-certs-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:36:51.815706    8434 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:36:51.815784    8434 notify.go:220] Checking for updates...
	I0906 12:36:51.822730    8434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:36:51.825686    8434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:36:51.828678    8434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:36:51.831744    8434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:36:51.834648    8434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:36:51.837943    8434 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:51.838006    8434 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:36:51.838052    8434 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:36:51.842672    8434 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:36:51.849703    8434 start.go:297] selected driver: qemu2
	I0906 12:36:51.849711    8434 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:36:51.849719    8434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:36:51.852065    8434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:36:51.854707    8434 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:36:51.857747    8434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:36:51.857765    8434 cni.go:84] Creating CNI manager for ""
	I0906 12:36:51.857772    8434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:36:51.857776    8434 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:36:51.857809    8434 start.go:340] cluster config:
	{Name:embed-certs-760000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMn
etPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:36:51.861568    8434 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:36:51.869749    8434 out.go:177] * Starting "embed-certs-760000" primary control-plane node in "embed-certs-760000" cluster
	I0906 12:36:51.873660    8434 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:36:51.873680    8434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:36:51.873691    8434 cache.go:56] Caching tarball of preloaded images
	I0906 12:36:51.873772    8434 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:36:51.873778    8434 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:36:51.873860    8434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/embed-certs-760000/config.json ...
	I0906 12:36:51.873878    8434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/embed-certs-760000/config.json: {Name:mk140323b024f0dd7c1f7e44c665ffc6a095cc1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:36:51.874127    8434 start.go:360] acquireMachinesLock for embed-certs-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:51.874166    8434 start.go:364] duration metric: took 32.75µs to acquireMachinesLock for "embed-certs-760000"
	I0906 12:36:51.874180    8434 start.go:93] Provisioning new machine with config: &{Name:embed-certs-760000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-7600
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:51.874227    8434 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:51.882668    8434 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:36:51.900628    8434 start.go:159] libmachine.API.Create for "embed-certs-760000" (driver="qemu2")
	I0906 12:36:51.900665    8434 client.go:168] LocalClient.Create starting
	I0906 12:36:51.900743    8434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:51.900775    8434 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:51.900789    8434 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:51.900826    8434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:51.900850    8434 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:51.900858    8434 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:51.901283    8434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:52.063667    8434 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:52.134879    8434 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:52.134886    8434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:52.135066    8434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:36:52.144196    8434 main.go:141] libmachine: STDOUT: 
	I0906 12:36:52.144215    8434 main.go:141] libmachine: STDERR: 
	I0906 12:36:52.144260    8434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2 +20000M
	I0906 12:36:52.152040    8434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:52.152054    8434 main.go:141] libmachine: STDERR: 
	I0906 12:36:52.152064    8434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:36:52.152067    8434 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:52.152082    8434 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:52.152107    8434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:54:8a:6a:29:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:36:52.153649    8434 main.go:141] libmachine: STDOUT: 
	I0906 12:36:52.153667    8434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:52.153687    8434 client.go:171] duration metric: took 253.018375ms to LocalClient.Create
	I0906 12:36:54.155868    8434 start.go:128] duration metric: took 2.281630375s to createHost
	I0906 12:36:54.155913    8434 start.go:83] releasing machines lock for "embed-certs-760000", held for 2.281754s
	W0906 12:36:54.155971    8434 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:54.166804    8434 out.go:177] * Deleting "embed-certs-760000" in qemu2 ...
	W0906 12:36:54.202242    8434 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:36:54.202266    8434 start.go:729] Will try again in 5 seconds ...
	I0906 12:36:59.204503    8434 start.go:360] acquireMachinesLock for embed-certs-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:36:59.204948    8434 start.go:364] duration metric: took 343.291µs to acquireMachinesLock for "embed-certs-760000"
	I0906 12:36:59.205051    8434 start.go:93] Provisioning new machine with config: &{Name:embed-certs-760000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-7600
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:36:59.205334    8434 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:36:59.223063    8434 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:36:59.273475    8434 start.go:159] libmachine.API.Create for "embed-certs-760000" (driver="qemu2")
	I0906 12:36:59.273532    8434 client.go:168] LocalClient.Create starting
	I0906 12:36:59.273627    8434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:36:59.273687    8434 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:59.273705    8434 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:59.273766    8434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:36:59.273817    8434 main.go:141] libmachine: Decoding PEM data...
	I0906 12:36:59.273828    8434 main.go:141] libmachine: Parsing certificate...
	I0906 12:36:59.274502    8434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:36:59.445310    8434 main.go:141] libmachine: Creating SSH key...
	I0906 12:36:59.638561    8434 main.go:141] libmachine: Creating Disk image...
	I0906 12:36:59.638568    8434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:36:59.638774    8434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:36:59.648415    8434 main.go:141] libmachine: STDOUT: 
	I0906 12:36:59.648445    8434 main.go:141] libmachine: STDERR: 
	I0906 12:36:59.648487    8434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2 +20000M
	I0906 12:36:59.656444    8434 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:36:59.656460    8434 main.go:141] libmachine: STDERR: 
	I0906 12:36:59.656470    8434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:36:59.656476    8434 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:36:59.656488    8434 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:36:59.656519    8434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a4:c0:dd:62:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:36:59.658111    8434 main.go:141] libmachine: STDOUT: 
	I0906 12:36:59.658129    8434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:36:59.658142    8434 client.go:171] duration metric: took 384.607458ms to LocalClient.Create
	I0906 12:37:01.660379    8434 start.go:128] duration metric: took 2.45495375s to createHost
	I0906 12:37:01.660436    8434 start.go:83] releasing machines lock for "embed-certs-760000", held for 2.45548275s
	W0906 12:37:01.660829    8434 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:01.675528    8434 out.go:201] 
	W0906 12:37:01.678676    8434 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:01.678742    8434 out.go:270] * 
	W0906 12:37:01.681280    8434 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:37:01.687511    8434 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-760000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (66.553792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.01s)
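All of the embed-certs failures below share the root cause visible in the stderr above: the qemu2 driver launches the VM through socket_vmnet_client, and the socket_vmnet daemon was not serving /var/run/socket_vmnet, so every VM start died with "Connection refused". A minimal Go sketch that reproduces just that failing step (this file is not part of the test suite; only the socket path is taken from the log):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client needs on macOS.
		// With the daemon down, this fails with "connection refused",
		// matching the STDERR captured above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is reachable")
	}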

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-760000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-760000 create -f testdata/busybox.yaml: exit status 1 (29.373ms)

** stderr ** 
	error: context "embed-certs-760000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-760000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (29.9735ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (29.975542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
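This failure is secondary: FirstStart never created the cluster, so no "embed-certs-760000" context was written to the kubeconfig and every kubectl --context call exits 1. A short sketch of checking that precondition with client-go's kubeconfig loader (assuming k8s.io/client-go; the helper is illustrative, not part of the suite):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the merged kubeconfig the way kubectl does
		// ($KUBECONFIG, falling back to ~/.kube/config).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["embed-certs-760000"]; !ok {
			// The state the test runs into: start failed, so the
			// profile's context was never written.
			fmt.Println(`context "embed-certs-760000" does not exist`)
		}
	}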

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-760000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-760000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-760000 describe deploy/metrics-server -n kube-system: exit status 1 (26.126041ms)

** stderr ** 
	error: context "embed-certs-760000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-760000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (30.321792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-760000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
E0906 12:37:09.134691    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-760000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.172764916s)

-- stdout --
	* [embed-certs-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-760000" primary control-plane node in "embed-certs-760000" cluster
	* Restarting existing qemu2 VM for "embed-certs-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:37:05.416827    8482 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:05.416959    8482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:05.416963    8482 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:05.416972    8482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:05.417096    8482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:05.418085    8482 out.go:352] Setting JSON to false
	I0906 12:37:05.434571    8482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5795,"bootTime":1725645630,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:37:05.434648    8482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:37:05.438421    8482 out.go:177] * [embed-certs-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:37:05.445453    8482 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:37:05.445522    8482 notify.go:220] Checking for updates...
	I0906 12:37:05.452428    8482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:37:05.455416    8482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:37:05.458410    8482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:37:05.461494    8482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:37:05.464441    8482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:37:05.466206    8482 config.go:182] Loaded profile config "embed-certs-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:05.466498    8482 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:37:05.470430    8482 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:37:05.477270    8482 start.go:297] selected driver: qemu2
	I0906 12:37:05.477276    8482 start.go:901] validating driver "qemu2" against &{Name:embed-certs-760000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:05.477339    8482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:37:05.479603    8482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:37:05.479640    8482 cni.go:84] Creating CNI manager for ""
	I0906 12:37:05.479649    8482 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:37:05.479672    8482 start.go:340] cluster config:
	{Name:embed-certs-760000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:05.483317    8482 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:05.491472    8482 out.go:177] * Starting "embed-certs-760000" primary control-plane node in "embed-certs-760000" cluster
	I0906 12:37:05.495306    8482 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:37:05.495319    8482 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:37:05.495325    8482 cache.go:56] Caching tarball of preloaded images
	I0906 12:37:05.495382    8482 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:37:05.495387    8482 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:37:05.495435    8482 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/embed-certs-760000/config.json ...
	I0906 12:37:05.495904    8482 start.go:360] acquireMachinesLock for embed-certs-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:05.495930    8482 start.go:364] duration metric: took 20.833µs to acquireMachinesLock for "embed-certs-760000"
	I0906 12:37:05.495940    8482 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:37:05.495945    8482 fix.go:54] fixHost starting: 
	I0906 12:37:05.496059    8482 fix.go:112] recreateIfNeeded on embed-certs-760000: state=Stopped err=<nil>
	W0906 12:37:05.496068    8482 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:37:05.500473    8482 out.go:177] * Restarting existing qemu2 VM for "embed-certs-760000" ...
	I0906 12:37:05.508414    8482 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:05.508459    8482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a4:c0:dd:62:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:37:05.510397    8482 main.go:141] libmachine: STDOUT: 
	I0906 12:37:05.510423    8482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:05.510461    8482 fix.go:56] duration metric: took 14.515ms for fixHost
	I0906 12:37:05.510466    8482 start.go:83] releasing machines lock for "embed-certs-760000", held for 14.531416ms
	W0906 12:37:05.510473    8482 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:05.510508    8482 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:05.510512    8482 start.go:729] Will try again in 5 seconds ...
	I0906 12:37:10.512546    8482 start.go:360] acquireMachinesLock for embed-certs-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:10.512643    8482 start.go:364] duration metric: took 71.791µs to acquireMachinesLock for "embed-certs-760000"
	I0906 12:37:10.512662    8482 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:37:10.512666    8482 fix.go:54] fixHost starting: 
	I0906 12:37:10.512864    8482 fix.go:112] recreateIfNeeded on embed-certs-760000: state=Stopped err=<nil>
	W0906 12:37:10.512871    8482 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:37:10.516658    8482 out.go:177] * Restarting existing qemu2 VM for "embed-certs-760000" ...
	I0906 12:37:10.524613    8482 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:10.524669    8482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a4:c0:dd:62:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/embed-certs-760000/disk.qcow2
	I0906 12:37:10.527067    8482 main.go:141] libmachine: STDOUT: 
	I0906 12:37:10.527101    8482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:10.527122    8482 fix.go:56] duration metric: took 14.455875ms for fixHost
	I0906 12:37:10.527128    8482 start.go:83] releasing machines lock for "embed-certs-760000", held for 14.479667ms
	W0906 12:37:10.527170    8482 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:10.533621    8482 out.go:201] 
	W0906 12:37:10.537625    8482 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:10.537636    8482 out.go:270] * 
	W0906 12:37:10.538282    8482 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:37:10.552245    8482 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-760000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (38.199583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.21s)
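The stderr above shows minikube's internal retry: fixHost fails, the machines lock is released, and the start is attempted once more after five seconds before exiting 80 with GUEST_PROVISION. A generic sketch of that retry shape, assuming nothing beyond the standard library (startHost and the messages are illustrative, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails in the log above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}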

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-760000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (28.702625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-760000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-760000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-760000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.482ms)

** stderr ** 
	error: context "embed-certs-760000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-760000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (28.875791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-760000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (29.6815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
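The "(-want +got)" block above is a structured diff: every expected v1.31.0 image carries a "-" prefix because "minikube image list" returned nothing from the stopped VM, leaving the got side empty. A minimal sketch of how a diff in that shape is produced, assuming the github.com/google/go-cmp library (the slices are abbreviated for illustration):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // empty: the VM never started, so no images were listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}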

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-760000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-760000 --alsologtostderr -v=1: exit status 83 (37.785208ms)

-- stdout --
	* The control-plane node embed-certs-760000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-760000"

-- /stdout --
** stderr ** 
	I0906 12:37:10.781177    8508 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:10.781339    8508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:10.781345    8508 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:10.781347    8508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:10.781493    8508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:10.781718    8508 out.go:352] Setting JSON to false
	I0906 12:37:10.781725    8508 mustload.go:65] Loading cluster: embed-certs-760000
	I0906 12:37:10.781902    8508 config.go:182] Loaded profile config "embed-certs-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:10.783898    8508 out.go:177] * The control-plane node embed-certs-760000 host is not running: state=Stopped
	I0906 12:37:10.787728    8508 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-760000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-760000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (29.006083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (29.135375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-760000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-760000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.983845542s)

-- stdout --
	* [default-k8s-diff-port-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-760000" primary control-plane node in "default-k8s-diff-port-760000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-760000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:37:11.202618    8532 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:11.202762    8532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:11.202765    8532 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:11.202768    8532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:11.202903    8532 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:11.204042    8532 out.go:352] Setting JSON to false
	I0906 12:37:11.220196    8532 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5801,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:37:11.220264    8532 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:37:11.223861    8532 out.go:177] * [default-k8s-diff-port-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:37:11.226743    8532 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:37:11.226804    8532 notify.go:220] Checking for updates...
	I0906 12:37:11.233583    8532 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:37:11.236717    8532 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:37:11.239765    8532 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:37:11.242771    8532 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:37:11.245800    8532 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:37:11.249061    8532 config.go:182] Loaded profile config "cert-expiration-051000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:11.249131    8532 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:11.249180    8532 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:37:11.253732    8532 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:37:11.260717    8532 start.go:297] selected driver: qemu2
	I0906 12:37:11.260725    8532 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:37:11.260733    8532 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:37:11.262909    8532 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:37:11.265730    8532 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:37:11.268781    8532 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:37:11.268811    8532 cni.go:84] Creating CNI manager for ""
	I0906 12:37:11.268818    8532 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:37:11.268822    8532 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:37:11.268845    8532 start.go:340] cluster config:
	{Name:default-k8s-diff-port-760000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:11.272421    8532 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:11.279576    8532 out.go:177] * Starting "default-k8s-diff-port-760000" primary control-plane node in "default-k8s-diff-port-760000" cluster
	I0906 12:37:11.283774    8532 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:37:11.283790    8532 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:37:11.283799    8532 cache.go:56] Caching tarball of preloaded images
	I0906 12:37:11.283860    8532 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:37:11.283865    8532 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:37:11.283941    8532 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/default-k8s-diff-port-760000/config.json ...
	I0906 12:37:11.283959    8532 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/default-k8s-diff-port-760000/config.json: {Name:mkc663ee223f9676302fd150e56abea69e5f7522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:37:11.284183    8532 start.go:360] acquireMachinesLock for default-k8s-diff-port-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:11.284221    8532 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "default-k8s-diff-port-760000"
	I0906 12:37:11.284233    8532 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:37:11.284270    8532 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:37:11.288601    8532 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:37:11.305672    8532 start.go:159] libmachine.API.Create for "default-k8s-diff-port-760000" (driver="qemu2")
	I0906 12:37:11.305702    8532 client.go:168] LocalClient.Create starting
	I0906 12:37:11.305768    8532 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:37:11.305797    8532 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:11.305805    8532 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:11.305840    8532 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:37:11.305862    8532 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:11.305868    8532 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:11.306208    8532 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:37:11.469499    8532 main.go:141] libmachine: Creating SSH key...
	I0906 12:37:11.546468    8532 main.go:141] libmachine: Creating Disk image...
	I0906 12:37:11.546473    8532 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:37:11.546634    8532 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:11.555882    8532 main.go:141] libmachine: STDOUT: 
	I0906 12:37:11.555906    8532 main.go:141] libmachine: STDERR: 
	I0906 12:37:11.555951    8532 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2 +20000M
	I0906 12:37:11.563780    8532 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:37:11.563795    8532 main.go:141] libmachine: STDERR: 
	I0906 12:37:11.563808    8532 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:11.563814    8532 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:37:11.563828    8532 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:11.563853    8532 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:7d:5a:01:4f:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:11.565442    8532 main.go:141] libmachine: STDOUT: 
	I0906 12:37:11.565463    8532 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:11.565483    8532 client.go:171] duration metric: took 259.779625ms to LocalClient.Create
	I0906 12:37:13.567674    8532 start.go:128] duration metric: took 2.283401083s to createHost
	I0906 12:37:13.567743    8532 start.go:83] releasing machines lock for "default-k8s-diff-port-760000", held for 2.283529792s
	W0906 12:37:13.567838    8532 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:13.578080    8532 out.go:177] * Deleting "default-k8s-diff-port-760000" in qemu2 ...
	W0906 12:37:13.611211    8532 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:13.611232    8532 start.go:729] Will try again in 5 seconds ...
	I0906 12:37:18.613413    8532 start.go:360] acquireMachinesLock for default-k8s-diff-port-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:18.613956    8532 start.go:364] duration metric: took 401.542µs to acquireMachinesLock for "default-k8s-diff-port-760000"
	I0906 12:37:18.614108    8532 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:37:18.614450    8532 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:37:18.619088    8532 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:37:18.667770    8532 start.go:159] libmachine.API.Create for "default-k8s-diff-port-760000" (driver="qemu2")
	I0906 12:37:18.667815    8532 client.go:168] LocalClient.Create starting
	I0906 12:37:18.667935    8532 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:37:18.667996    8532 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:18.668012    8532 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:18.668069    8532 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:37:18.668114    8532 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:18.668128    8532 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:18.668733    8532 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:37:18.862612    8532 main.go:141] libmachine: Creating SSH key...
	I0906 12:37:19.089900    8532 main.go:141] libmachine: Creating Disk image...
	I0906 12:37:19.089908    8532 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:37:19.090138    8532 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:19.099999    8532 main.go:141] libmachine: STDOUT: 
	I0906 12:37:19.100021    8532 main.go:141] libmachine: STDERR: 
	I0906 12:37:19.100080    8532 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2 +20000M
	I0906 12:37:19.107998    8532 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:37:19.108015    8532 main.go:141] libmachine: STDERR: 
	I0906 12:37:19.108027    8532 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:19.108039    8532 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:37:19.108048    8532 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:19.108083    8532 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a8:f8:a2:f6:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:19.109693    8532 main.go:141] libmachine: STDOUT: 
	I0906 12:37:19.109710    8532 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:19.109724    8532 client.go:171] duration metric: took 441.904917ms to LocalClient.Create
	I0906 12:37:21.111882    8532 start.go:128] duration metric: took 2.497425416s to createHost
	I0906 12:37:21.111932    8532 start.go:83] releasing machines lock for "default-k8s-diff-port-760000", held for 2.497965833s
	W0906 12:37:21.112261    8532 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:21.122993    8532 out.go:201] 
	W0906 12:37:21.131082    8532 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:21.131113    8532 out.go:270] * 
	* 
	W0906 12:37:21.133646    8532 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:37:21.144002    8532 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-760000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (64.990666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)
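
Every qemu2 start in this report fails at the same point: socket_vmnet_client cannot reach the daemon behind /var/run/socket_vmnet, so QEMU is never launched. On a unix socket, "Connection refused" specifically means the socket file exists but nothing is listening on it. A minimal standalone probe (in Go, matching the language the logged code comes from; this is not part of the test suite) that reproduces the check:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no daemon is
		// listening; a missing file would surface as "no such file or
		// directory" instead.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run on the test host before the suite, a probe like this would have flagged the broken environment before any of the ~10-second start attempts below were spent.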

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-793000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-793000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.849659375s)

-- stdout --
	* [newest-cni-793000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-793000" primary control-plane node in "newest-cni-793000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-793000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:37:15.750060    8548 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:15.750178    8548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:15.750182    8548 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:15.750192    8548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:15.750348    8548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:15.751430    8548 out.go:352] Setting JSON to false
	I0906 12:37:15.767694    8548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5805,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:37:15.767767    8548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:37:15.773928    8548 out.go:177] * [newest-cni-793000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:37:15.780736    8548 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:37:15.780777    8548 notify.go:220] Checking for updates...
	I0906 12:37:15.787716    8548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:37:15.790787    8548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:37:15.793730    8548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:37:15.796734    8548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:37:15.799737    8548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:37:15.803034    8548 config.go:182] Loaded profile config "default-k8s-diff-port-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:15.803101    8548 config.go:182] Loaded profile config "multinode-009000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:15.803148    8548 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:37:15.806683    8548 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 12:37:15.813751    8548 start.go:297] selected driver: qemu2
	I0906 12:37:15.813761    8548 start.go:901] validating driver "qemu2" against <nil>
	I0906 12:37:15.813769    8548 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:37:15.816183    8548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0906 12:37:15.816213    8548 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0906 12:37:15.818682    8548 out.go:177] * Automatically selected the socket_vmnet network
	I0906 12:37:15.825848    8548 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 12:37:15.825879    8548 cni.go:84] Creating CNI manager for ""
	I0906 12:37:15.825892    8548 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:37:15.825897    8548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:37:15.825924    8548 start.go:340] cluster config:
	{Name:newest-cni-793000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:15.829598    8548 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:15.836708    8548 out.go:177] * Starting "newest-cni-793000" primary control-plane node in "newest-cni-793000" cluster
	I0906 12:37:15.840724    8548 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:37:15.840742    8548 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:37:15.840751    8548 cache.go:56] Caching tarball of preloaded images
	I0906 12:37:15.840827    8548 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:37:15.840834    8548 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:37:15.840919    8548 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/newest-cni-793000/config.json ...
	I0906 12:37:15.840942    8548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/newest-cni-793000/config.json: {Name:mkc27f4ecd287e1f316f57e4ff3685f80919d434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:37:15.841163    8548 start.go:360] acquireMachinesLock for newest-cni-793000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:15.841209    8548 start.go:364] duration metric: took 39.833µs to acquireMachinesLock for "newest-cni-793000"
	I0906 12:37:15.841222    8548 start.go:93] Provisioning new machine with config: &{Name:newest-cni-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:37:15.841265    8548 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:37:15.849757    8548 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:37:15.868561    8548 start.go:159] libmachine.API.Create for "newest-cni-793000" (driver="qemu2")
	I0906 12:37:15.868595    8548 client.go:168] LocalClient.Create starting
	I0906 12:37:15.868667    8548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:37:15.868700    8548 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:15.868709    8548 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:15.868747    8548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:37:15.868771    8548 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:15.868779    8548 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:15.869142    8548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:37:16.030686    8548 main.go:141] libmachine: Creating SSH key...
	I0906 12:37:16.141403    8548 main.go:141] libmachine: Creating Disk image...
	I0906 12:37:16.141408    8548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:37:16.141578    8548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:16.150938    8548 main.go:141] libmachine: STDOUT: 
	I0906 12:37:16.150956    8548 main.go:141] libmachine: STDERR: 
	I0906 12:37:16.151009    8548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2 +20000M
	I0906 12:37:16.158826    8548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:37:16.158848    8548 main.go:141] libmachine: STDERR: 
	I0906 12:37:16.158862    8548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:16.158870    8548 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:37:16.158888    8548 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:16.158919    8548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:2a:11:95:ee:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:16.160538    8548 main.go:141] libmachine: STDOUT: 
	I0906 12:37:16.160556    8548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:16.160575    8548 client.go:171] duration metric: took 291.977125ms to LocalClient.Create
	I0906 12:37:18.162761    8548 start.go:128] duration metric: took 2.321491167s to createHost
	I0906 12:37:18.162823    8548 start.go:83] releasing machines lock for "newest-cni-793000", held for 2.321620417s
	W0906 12:37:18.162870    8548 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:18.175732    8548 out.go:177] * Deleting "newest-cni-793000" in qemu2 ...
	W0906 12:37:18.206778    8548 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:18.206803    8548 start.go:729] Will try again in 5 seconds ...
	I0906 12:37:23.208932    8548 start.go:360] acquireMachinesLock for newest-cni-793000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:23.209309    8548 start.go:364] duration metric: took 306.5µs to acquireMachinesLock for "newest-cni-793000"
	I0906 12:37:23.209457    8548 start.go:93] Provisioning new machine with config: &{Name:newest-cni-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:37:23.209691    8548 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 12:37:23.218452    8548 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 12:37:23.268255    8548 start.go:159] libmachine.API.Create for "newest-cni-793000" (driver="qemu2")
	I0906 12:37:23.268306    8548 client.go:168] LocalClient.Create starting
	I0906 12:37:23.268411    8548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/ca.pem
	I0906 12:37:23.268463    8548 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:23.268481    8548 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:23.268540    8548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-2143/.minikube/certs/cert.pem
	I0906 12:37:23.268569    8548 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:23.268581    8548 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:23.269175    8548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso...
	I0906 12:37:23.447293    8548 main.go:141] libmachine: Creating SSH key...
	I0906 12:37:23.505704    8548 main.go:141] libmachine: Creating Disk image...
	I0906 12:37:23.505711    8548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 12:37:23.505885    8548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2.raw /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:23.514842    8548 main.go:141] libmachine: STDOUT: 
	I0906 12:37:23.514863    8548 main.go:141] libmachine: STDERR: 
	I0906 12:37:23.514917    8548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2 +20000M
	I0906 12:37:23.522739    8548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 12:37:23.522765    8548 main.go:141] libmachine: STDERR: 
	I0906 12:37:23.522781    8548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:23.522785    8548 main.go:141] libmachine: Starting QEMU VM...
	I0906 12:37:23.522793    8548 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:23.522825    8548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:89:8f:2c:8b:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:23.524487    8548 main.go:141] libmachine: STDOUT: 
	I0906 12:37:23.524505    8548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:23.524517    8548 client.go:171] duration metric: took 256.208458ms to LocalClient.Create
	I0906 12:37:25.526709    8548 start.go:128] duration metric: took 2.316982833s to createHost
	I0906 12:37:25.526792    8548 start.go:83] releasing machines lock for "newest-cni-793000", held for 2.317474042s
	W0906 12:37:25.527028    8548 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-793000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-793000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:25.540603    8548 out.go:201] 
	W0906 12:37:25.547731    8548 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:25.547757    8548 out.go:270] * 
	* 
	W0906 12:37:25.550328    8548 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:37:25.558527    8548 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-793000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000: exit status 7 (63.730625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
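
The control flow visible in both FirstStart logs is the same: createHost fails, minikube deletes the half-created profile, waits a fixed 5 seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION (exit status 80). A simplified sketch of that pattern follows; it is not minikube's actual implementation, and startHost here is a stub that always returns the logged error:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's createHost path; in this run every
// attempt returns the same socket_vmnet error.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			// minikube reports this as GUEST_PROVISION and exits 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

Because the failure is environmental (no daemon behind the host socket), the single retry cannot succeed, which is why every start-flavored test in this group fails in roughly 10 seconds.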

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-760000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-760000 create -f testdata/busybox.yaml: exit status 1 (29.730541ms)

** stderr ** 
	error: context "default-k8s-diff-port-760000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-760000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (29.198208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (29.359292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
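
This failure is purely downstream of FirstStart: provisioning exited before a kubeconfig entry was written, so the context "default-k8s-diff-port-760000" does not exist and every kubectl call against it fails immediately. A hypothetical preflight (contextExists is an illustrative name, not a helper in the suite) that distinguishes "cluster broken" from "cluster never created":

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl's kubeconfig knows the named context.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("default-k8s-diff-port-760000")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("context present:", ok) // false in this run
}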

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-760000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-760000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-760000 describe deploy/metrics-server -n kube-system: exit status 1 (26.722084ms)

** stderr ** 
	error: context "default-k8s-diff-port-760000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-760000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (28.829166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
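
For reference, the substring the assertion looks for appears to be composed by prefixing the --registries override onto the --images override. The sketch below is hypothetical, but the resulting string is exactly the one quoted in the failure message above:

package main

import "fmt"

func main() {
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
	registry := "fake.domain"                 // from --registries=MetricsServer=...
	fmt.Println(registry + "/" + image)       // fake.domain/registry.k8s.io/echoserver:1.4
}

Since the deployment description is empty here (the context does not exist), the substring match necessarily fails.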

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-760000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-760000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.707864708s)

-- stdout --
	* [default-k8s-diff-port-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-760000" primary control-plane node in "default-k8s-diff-port-760000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:37:24.942314    8600 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:24.942447    8600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:24.942450    8600 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:24.942452    8600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:24.942583    8600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:24.943570    8600 out.go:352] Setting JSON to false
	I0906 12:37:24.959766    8600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5814,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:37:24.959831    8600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:37:24.963876    8600 out.go:177] * [default-k8s-diff-port-760000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:37:24.969864    8600 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:37:24.969959    8600 notify.go:220] Checking for updates...
	I0906 12:37:24.976850    8600 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:37:24.979854    8600 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:37:24.982903    8600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:37:24.985846    8600 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:37:24.988876    8600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:37:24.992172    8600 config.go:182] Loaded profile config "default-k8s-diff-port-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:24.992429    8600 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:37:24.996865    8600 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:37:25.003744    8600 start.go:297] selected driver: qemu2
	I0906 12:37:25.003750    8600 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:25.003805    8600 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:37:25.006046    8600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:37:25.006073    8600 cni.go:84] Creating CNI manager for ""
	I0906 12:37:25.006080    8600 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:37:25.006107    8600 start.go:340] cluster config:
	{Name:default-k8s-diff-port-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:25.009568    8600 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:25.016647    8600 out.go:177] * Starting "default-k8s-diff-port-760000" primary control-plane node in "default-k8s-diff-port-760000" cluster
	I0906 12:37:25.020853    8600 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:37:25.020873    8600 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:37:25.020885    8600 cache.go:56] Caching tarball of preloaded images
	I0906 12:37:25.020961    8600 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:37:25.020967    8600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:37:25.021031    8600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/default-k8s-diff-port-760000/config.json ...
	I0906 12:37:25.021476    8600 start.go:360] acquireMachinesLock for default-k8s-diff-port-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:25.526917    8600 start.go:364] duration metric: took 505.397834ms to acquireMachinesLock for "default-k8s-diff-port-760000"
	I0906 12:37:25.527130    8600 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:37:25.527156    8600 fix.go:54] fixHost starting: 
	I0906 12:37:25.527818    8600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-760000: state=Stopped err=<nil>
	W0906 12:37:25.527866    8600 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:37:25.543543    8600 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-760000" ...
	I0906 12:37:25.548953    8600 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:25.549178    8600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a8:f8:a2:f6:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:25.558984    8600 main.go:141] libmachine: STDOUT: 
	I0906 12:37:25.559064    8600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:25.559194    8600 fix.go:56] duration metric: took 32.040084ms for fixHost
	I0906 12:37:25.559213    8600 start.go:83] releasing machines lock for "default-k8s-diff-port-760000", held for 32.216584ms
	W0906 12:37:25.559252    8600 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:25.559414    8600 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:25.559432    8600 start.go:729] Will try again in 5 seconds ...
	I0906 12:37:30.561649    8600 start.go:360] acquireMachinesLock for default-k8s-diff-port-760000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:30.562125    8600 start.go:364] duration metric: took 349.375µs to acquireMachinesLock for "default-k8s-diff-port-760000"
	I0906 12:37:30.562268    8600 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:37:30.562289    8600 fix.go:54] fixHost starting: 
	I0906 12:37:30.562956    8600 fix.go:112] recreateIfNeeded on default-k8s-diff-port-760000: state=Stopped err=<nil>
	W0906 12:37:30.562986    8600 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:37:30.572584    8600 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-760000" ...
	I0906 12:37:30.576591    8600 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:30.576786    8600 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a8:f8:a2:f6:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/default-k8s-diff-port-760000/disk.qcow2
	I0906 12:37:30.585952    8600 main.go:141] libmachine: STDOUT: 
	I0906 12:37:30.586382    8600 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:30.586493    8600 fix.go:56] duration metric: took 24.204375ms for fixHost
	I0906 12:37:30.586515    8600 start.go:83] releasing machines lock for "default-k8s-diff-port-760000", held for 24.368875ms
	W0906 12:37:30.586705    8600 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:30.594547    8600 out.go:201] 
	W0906 12:37:30.598658    8600 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:30.598704    8600 out.go:270] * 
	* 
	W0906 12:37:30.601499    8600 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:37:30.609606    8600 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-760000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (67.516958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.78s)
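
Every failure in this group has the same root cause: the qemu2 driver launches QEMU through socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A minimal Go sketch, not part of the test suite, that performs the same reachability probe against the socket path taken from the log above:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dialing the socket fails with "connection refused" when no
		// socket_vmnet daemon is listening on the path, which matches
		// the driver error repeated throughout this run.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

With no listener on that socket, both start attempts fail within milliseconds of acquiring the machines lock, which is why the whole test finishes in only a few seconds.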

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-793000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-793000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.177429292s)

-- stdout --
	* [newest-cni-793000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-793000" primary control-plane node in "newest-cni-793000" cluster
	* Restarting existing qemu2 VM for "newest-cni-793000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-793000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 12:37:27.811743    8627 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:27.811867    8627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:27.811870    8627 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:27.811872    8627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:27.811991    8627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:27.812990    8627 out.go:352] Setting JSON to false
	I0906 12:37:27.829088    8627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5817,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 12:37:27.829157    8627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:37:27.833968    8627 out.go:177] * [newest-cni-793000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 12:37:27.839939    8627 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:37:27.839996    8627 notify.go:220] Checking for updates...
	I0906 12:37:27.846978    8627 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 12:37:27.849884    8627 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 12:37:27.852915    8627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:37:27.855959    8627 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 12:37:27.857230    8627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:37:27.860228    8627 config.go:182] Loaded profile config "newest-cni-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:27.860500    8627 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:37:27.864933    8627 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 12:37:27.869883    8627 start.go:297] selected driver: qemu2
	I0906 12:37:27.869890    8627 start.go:901] validating driver "qemu2" against &{Name:newest-cni-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:27.869933    8627 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:37:27.872097    8627 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 12:37:27.872140    8627 cni.go:84] Creating CNI manager for ""
	I0906 12:37:27.872149    8627 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:37:27.872179    8627 start.go:340] cluster config:
	{Name:newest-cni-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:27.875743    8627 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:27.882901    8627 out.go:177] * Starting "newest-cni-793000" primary control-plane node in "newest-cni-793000" cluster
	I0906 12:37:27.886854    8627 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:37:27.886867    8627 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 12:37:27.886878    8627 cache.go:56] Caching tarball of preloaded images
	I0906 12:37:27.886921    8627 preload.go:172] Found /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 12:37:27.886926    8627 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:37:27.886979    8627 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/newest-cni-793000/config.json ...
	I0906 12:37:27.887412    8627 start.go:360] acquireMachinesLock for newest-cni-793000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:27.887438    8627 start.go:364] duration metric: took 20.75µs to acquireMachinesLock for "newest-cni-793000"
	I0906 12:37:27.887448    8627 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:37:27.887455    8627 fix.go:54] fixHost starting: 
	I0906 12:37:27.887567    8627 fix.go:112] recreateIfNeeded on newest-cni-793000: state=Stopped err=<nil>
	W0906 12:37:27.887575    8627 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:37:27.895941    8627 out.go:177] * Restarting existing qemu2 VM for "newest-cni-793000" ...
	I0906 12:37:27.899967    8627 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:27.900017    8627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:89:8f:2c:8b:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:27.902055    8627 main.go:141] libmachine: STDOUT: 
	I0906 12:37:27.902079    8627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:27.902111    8627 fix.go:56] duration metric: took 14.657375ms for fixHost
	I0906 12:37:27.902116    8627 start.go:83] releasing machines lock for "newest-cni-793000", held for 14.673542ms
	W0906 12:37:27.902122    8627 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:27.902161    8627 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:27.902166    8627 start.go:729] Will try again in 5 seconds ...
	I0906 12:37:32.904288    8627 start.go:360] acquireMachinesLock for newest-cni-793000: {Name:mkbe80253a0710586408e6b826b5cbe2a87244da Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:32.904687    8627 start.go:364] duration metric: took 313.625µs to acquireMachinesLock for "newest-cni-793000"
	I0906 12:37:32.904827    8627 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:37:32.904848    8627 fix.go:54] fixHost starting: 
	I0906 12:37:32.905637    8627 fix.go:112] recreateIfNeeded on newest-cni-793000: state=Stopped err=<nil>
	W0906 12:37:32.905664    8627 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:37:32.910211    8627 out.go:177] * Restarting existing qemu2 VM for "newest-cni-793000" ...
	I0906 12:37:32.917116    8627 qemu.go:418] Using hvf for hardware acceleration
	I0906 12:37:32.917280    8627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:89:8f:2c:8b:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19576-2143/.minikube/machines/newest-cni-793000/disk.qcow2
	I0906 12:37:32.926849    8627 main.go:141] libmachine: STDOUT: 
	I0906 12:37:32.926945    8627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 12:37:32.927033    8627 fix.go:56] duration metric: took 22.188917ms for fixHost
	I0906 12:37:32.927050    8627 start.go:83] releasing machines lock for "newest-cni-793000", held for 22.341875ms
	W0906 12:37:32.927224    8627 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-793000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-793000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 12:37:32.935048    8627 out.go:201] 
	W0906 12:37:32.938105    8627 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 12:37:32.938129    8627 out.go:270] * 
	* 
	W0906 12:37:32.940166    8627 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:37:32.948085    8627 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-793000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000: exit status 7 (70.801ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
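
The log above shows the driver's recovery behavior: a failed fixHost, a logged warning, a fixed five-second wait, then exactly one retry before exiting with GUEST_PROVISION. A hedged sketch of that control flow (illustrative only, not minikube's actual implementation; the function names are invented):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the pattern visible in the log: warn on the
	// first failure, sleep five seconds, retry once, then give up.
	func startWithRetry(start func() error) error {
		err := start()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		return start()
	}

	func main() {
		attempt := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		if err := startWithRetry(attempt); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}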

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-760000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (31.570167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
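
Because the profile never started, its context was never written to the kubeconfig, so any client lookup fails immediately. A sketch of how that error surfaces when loading a named context with client-go (an assumption about the mechanism, not the suite's code):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig chain, forcing the missing context.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-760000"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			// Prints: context "default-k8s-diff-port-760000" does not exist
			fmt.Println(err)
		}
	}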

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-760000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-760000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-760000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.667292ms)

** stderr ** 
	error: context "default-k8s-diff-port-760000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-760000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (29.207625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-760000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (29.186958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
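
The "-want +got" listing above is go-cmp diff notation: each "-" line is an image that was expected but missing, and the got side is empty because image list has nothing to report for a VM that never started. A reduced sketch of how such a diff is produced (illustrative values only):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: nothing is listed for a stopped VM
		// Expected-but-missing entries are printed with a leading "-".
		fmt.Print(cmp.Diff(want, got))
	}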

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-760000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-760000 --alsologtostderr -v=1: exit status 83 (41.048875ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-760000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-760000"

-- /stdout --
** stderr ** 
	I0906 12:37:30.876918    8646 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:30.877074    8646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:30.877077    8646 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:30.877079    8646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:30.877201    8646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:30.877415    8646 out.go:352] Setting JSON to false
	I0906 12:37:30.877422    8646 mustload.go:65] Loading cluster: default-k8s-diff-port-760000
	I0906 12:37:30.877604    8646 config.go:182] Loaded profile config "default-k8s-diff-port-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:30.881911    8646 out.go:177] * The control-plane node default-k8s-diff-port-760000 host is not running: state=Stopped
	I0906 12:37:30.885860    8646 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-760000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-760000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (29.508083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (29.559834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-760000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-793000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000: exit status 7 (29.316666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-793000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-793000 --alsologtostderr -v=1: exit status 83 (42.218083ms)

-- stdout --
	* The control-plane node newest-cni-793000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-793000"

-- /stdout --
** stderr ** 
	I0906 12:37:33.137656    8672 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:33.137802    8672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:33.137806    8672 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:33.137808    8672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:33.137946    8672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 12:37:33.138175    8672 out.go:352] Setting JSON to false
	I0906 12:37:33.138182    8672 mustload.go:65] Loading cluster: newest-cni-793000
	I0906 12:37:33.138387    8672 config.go:182] Loaded profile config "newest-cni-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:33.142198    8672 out.go:177] * The control-plane node newest-cni-793000 host is not running: state=Stopped
	I0906 12:37:33.146147    8672 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-793000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-793000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000: exit status 7 (30.681625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-793000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000: exit status 7 (30.066042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (153/270)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 9.49
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 199.17
29 TestAddons/serial/Volcano 39.5
31 TestAddons/serial/GCPAuth/Namespaces 0.08
34 TestAddons/parallel/Ingress 17.42
35 TestAddons/parallel/InspektorGadget 10.33
36 TestAddons/parallel/MetricsServer 5.3
39 TestAddons/parallel/CSI 43.01
40 TestAddons/parallel/Headlamp 18.63
41 TestAddons/parallel/CloudSpanner 5.22
42 TestAddons/parallel/LocalPath 40.96
43 TestAddons/parallel/NvidiaDevicePlugin 6.16
44 TestAddons/parallel/Yakd 10.2
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 11.01
56 TestErrorSpam/setup 36.44
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.72
60 TestErrorSpam/unpause 0.63
61 TestErrorSpam/stop 64.33
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.94
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.38
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.85
73 TestFunctional/serial/CacheCmd/cache/add_local 1.16
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.81
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 37.03
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.63
85 TestFunctional/serial/InvalidService 3.61
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 9.62
89 TestFunctional/parallel/DryRun 0.26
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.26
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 26.04
99 TestFunctional/parallel/SSHCmd 0.14
100 TestFunctional/parallel/CpCmd 0.43
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.41
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.16
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.81
119 TestFunctional/parallel/ImageCommands/Setup 1.72
120 TestFunctional/parallel/DockerEnv/bash 0.28
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.11
137 TestFunctional/parallel/ServiceCmd/List 0.12
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 5.48
152 TestFunctional/parallel/MountCmd/specific-port 0.88
153 TestFunctional/parallel/MountCmd/VerifyCleanup 0.72
154 TestFunctional/delete_echo-server_images 0.07
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 177.66
161 TestMultiControlPlane/serial/DeployApp 5.08
162 TestMultiControlPlane/serial/PingHostFromPods 0.73
163 TestMultiControlPlane/serial/AddWorkerNode 54.69
164 TestMultiControlPlane/serial/NodeLabels 0.16
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.26
166 TestMultiControlPlane/serial/CopyFile 4.3
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.07
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 3.03
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
208 TestMainNoArgs 0.03
253 TestStoppedBinaryUpgrade/Setup 4.7
264 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
271 TestNoKubernetes/serial/ProfileList 15.84
272 TestNoKubernetes/serial/Stop 3.06
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
290 TestStartStop/group/old-k8s-version/serial/Stop 3.62
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
301 TestStartStop/group/no-preload/serial/Stop 3.47
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
312 TestStartStop/group/embed-certs/serial/Stop 3.3
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.36
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
330 TestStartStop/group/newest-cni/serial/Stop 1.96
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-666000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-666000: exit status 85 (97.56275ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-666000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |          |
	|         | -p download-only-666000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:28:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 11:28:35.476896    2674 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:28:35.477015    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:35.477019    2674 out.go:358] Setting ErrFile to fd 2...
	I0906 11:28:35.477021    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:35.477143    2674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	W0906 11:28:35.477205    2674 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19576-2143/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19576-2143/.minikube/config/config.json: no such file or directory
	I0906 11:28:35.478548    2674 out.go:352] Setting JSON to true
	I0906 11:28:35.495999    2674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1685,"bootTime":1725645630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 11:28:35.496067    2674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:28:35.501510    2674 out.go:97] [download-only-666000] minikube v1.34.0 on Darwin 14.5 (arm64)
	W0906 11:28:35.501697    2674 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 11:28:35.501706    2674 notify.go:220] Checking for updates...
	I0906 11:28:35.504481    2674 out.go:169] MINIKUBE_LOCATION=19576
	I0906 11:28:35.507407    2674 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:28:35.511495    2674 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 11:28:35.514523    2674 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:28:35.517428    2674 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	W0906 11:28:35.523472    2674 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 11:28:35.523681    2674 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:28:35.527456    2674 out.go:97] Using the qemu2 driver based on user configuration
	I0906 11:28:35.527473    2674 start.go:297] selected driver: qemu2
	I0906 11:28:35.527487    2674 start.go:901] validating driver "qemu2" against <nil>
	I0906 11:28:35.527556    2674 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:28:35.531482    2674 out.go:169] Automatically selected the socket_vmnet network
	I0906 11:28:35.537192    2674 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 11:28:35.537270    2674 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 11:28:35.537301    2674 cni.go:84] Creating CNI manager for ""
	I0906 11:28:35.537317    2674 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 11:28:35.537381    2674 start.go:340] cluster config:
	{Name:download-only-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:28:35.542964    2674 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:28:35.547446    2674 out.go:97] Downloading VM boot image ...
	I0906 11:28:35.547463    2674 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/iso/arm64/minikube-v1.34.0-arm64.iso
	I0906 11:28:40.745719    2674 out.go:97] Starting "download-only-666000" primary control-plane node in "download-only-666000" cluster
	I0906 11:28:40.745754    2674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:40.810034    2674 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 11:28:40.810061    2674 cache.go:56] Caching tarball of preloaded images
	I0906 11:28:40.810264    2674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:40.815333    2674 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0906 11:28:40.815340    2674 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 11:28:40.916000    2674 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 11:28:51.066846    2674 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 11:28:51.067032    2674 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 11:28:51.763516    2674 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0906 11:28:51.763717    2674 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/download-only-666000/config.json ...
	I0906 11:28:51.763748    2674 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/download-only-666000/config.json: {Name:mkac9a06d5758b5208f0be2aba6ce4f44041b623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:28:51.763997    2674 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:51.764170    2674 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0906 11:28:52.311420    2674 out.go:193] 
	W0906 11:28:52.317558    2674 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19576-2143/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960 0x10582f960] Decompressors:map[bz2:0x140005e1dd0 gz:0x140005e1dd8 tar:0x140005e1d80 tar.bz2:0x140005e1d90 tar.gz:0x140005e1da0 tar.xz:0x140005e1db0 tar.zst:0x140005e1dc0 tbz2:0x140005e1d90 tgz:0x140005e1da0 txz:0x140005e1db0 tzst:0x140005e1dc0 xz:0x140005e1de0 zip:0x140005e1df0 zst:0x140005e1de8] Getters:map[file:0x140005e2600 http:0x1400056c190 https:0x1400056c1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0906 11:28:52.317586    2674 out_reason.go:110] 
	W0906 11:28:52.325426    2674 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 11:28:52.328451    2674 out.go:193] 
	
	
	* The control-plane node download-only-666000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-666000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
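The getter dump in the log above is hashicorp/go-getter client state (minikube's download step drives go-getter; Mode:2 corresponds to go-getter's ClientModeFile). The failure mechanism: the ?checksum=file:<url> suffix makes go-getter fetch the referenced .sha256 file before the payload, and a 404 on that checksum file aborts the whole download, presumably because no darwin/arm64 kubectl binary was ever published for v1.20.0. A minimal, self-contained sketch of the same call, assuming the go-getter v1 API; the destination path here is illustrative, and this is not minikube's own code:

package main

import (
	"context"
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The checksum query parameter tells go-getter to fetch the .sha256
	// file first and verify the payload against it. A 404 on the
	// checksum URL fails the download before any payload is kept.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"

	client := &getter.Client{
		Ctx:  context.Background(),
		Src:  src,
		Dst:  "/tmp/kubectl.download", // scratch path for this sketch
		Mode: getter.ClientModeFile,   // appears as "Mode:2" in the dump above
	}
	// Expected here: "invalid checksum: Error downloading checksum file:
	// bad response code: 404", matching the log line above.
	if err := client.Get(); err != nil {
		log.Fatal(err)
	}
}

Run against a version that does publish darwin/arm64 binaries (v1.21.0 and later), the same call should download and verify the checksum in one step.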
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-666000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (9.49s)
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-782000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-782000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (9.488977333s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (9.49s)

TestDownloadOnly/v1.31.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-782000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-782000: exit status 85 (76.588334ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-666000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |                     |
	|         | -p download-only-666000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT | 06 Sep 24 11:28 PDT |
	| delete  | -p download-only-666000        | download-only-666000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT | 06 Sep 24 11:28 PDT |
	| start   | -o=json --download-only        | download-only-782000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |                     |
	|         | -p download-only-782000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:28:52
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 11:28:52.745654    2712 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:28:52.745775    2712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:52.745779    2712 out.go:358] Setting ErrFile to fd 2...
	I0906 11:28:52.745781    2712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:52.745901    2712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 11:28:52.746939    2712 out.go:352] Setting JSON to true
	I0906 11:28:52.763054    2712 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1702,"bootTime":1725645630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 11:28:52.763122    2712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:28:52.768435    2712 out.go:97] [download-only-782000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 11:28:52.768563    2712 notify.go:220] Checking for updates...
	I0906 11:28:52.772961    2712 out.go:169] MINIKUBE_LOCATION=19576
	I0906 11:28:52.774376    2712 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:28:52.778914    2712 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 11:28:52.781984    2712 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:28:52.783420    2712 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	W0906 11:28:52.789931    2712 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 11:28:52.790071    2712 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:28:52.792935    2712 out.go:97] Using the qemu2 driver based on user configuration
	I0906 11:28:52.792942    2712 start.go:297] selected driver: qemu2
	I0906 11:28:52.792945    2712 start.go:901] validating driver "qemu2" against <nil>
	I0906 11:28:52.792979    2712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:28:52.796959    2712 out.go:169] Automatically selected the socket_vmnet network
	I0906 11:28:52.803058    2712 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 11:28:52.803167    2712 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 11:28:52.803217    2712 cni.go:84] Creating CNI manager for ""
	I0906 11:28:52.803225    2712 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:28:52.803238    2712 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 11:28:52.803290    2712 start.go:340] cluster config:
	{Name:download-only-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:28:52.806617    2712 iso.go:125] acquiring lock: {Name:mka4eda78e1e7ac837be77111a81a2690077622c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:28:52.809932    2712 out.go:97] Starting "download-only-782000" primary control-plane node in "download-only-782000" cluster
	I0906 11:28:52.809940    2712 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:28:52.877584    2712 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 11:28:52.877600    2712 cache.go:56] Caching tarball of preloaded images
	I0906 11:28:52.877794    2712 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:28:52.881166    2712 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0906 11:28:52.881175    2712 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 11:28:52.967257    2712 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19576-2143/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-782000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-782000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
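For reference, the cluster config dumped in the Last Start log above is what minikube persists per profile as config.json (the "Saving config to .../profiles/<name>/config.json" step visible in the earlier v1.20.0 log). A minimal sketch of reading such a file back with encoding/json; the struct below is an illustrative subset with field names taken from the dump, not minikube's actual type definitions:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// clusterConfig mirrors a few fields of the config dump above; it is an
// assumed subset for illustration only.
type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	// e.g. ~/.minikube/profiles/download-only-782000/config.json
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: Kubernetes %s, runtime %s, driver %s, %d MB\n",
		cc.Name, cc.KubernetesConfig.KubernetesVersion,
		cc.KubernetesConfig.ContainerRuntime, cc.Driver, cc.Memory)
}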
TestDownloadOnly/v1.31.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-782000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.35s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-065000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-065000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-065000
--- PASS: TestBinaryMirror (0.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-439000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-439000: exit status 85 (56.1435ms)
-- stdout --
	* Profile "addons-439000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-439000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-439000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-439000: exit status 85 (52.221167ms)
-- stdout --
	* Profile "addons-439000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-439000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (199.17s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-439000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-439000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m19.171878208s)
--- PASS: TestAddons/Setup (199.17s)

TestAddons/serial/Volcano (39.5s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.753042ms
addons_test.go:905: volcano-admission stabilized in 7.782125ms
addons_test.go:913: volcano-controller stabilized in 7.81675ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7d8xm" [a9af1136-6819-492e-8ca4-9e606086cd6f] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005779333s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-t8rxg" [03d17627-9dab-4cb5-8d5f-ee2264ad91b8] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.008952375s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2ts22" [98d52cef-9fbd-4318-a433-30f5eecd3249] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.007968458s
addons_test.go:932: (dbg) Run:  kubectl --context addons-439000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-439000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-439000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [beb9b229-eae7-4b4d-a47c-518507d9feef] Pending
helpers_test.go:344: "test-job-nginx-0" [beb9b229-eae7-4b4d-a47c-518507d9feef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [beb9b229-eae7-4b4d-a47c-518507d9feef] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.005246833s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable volcano --alsologtostderr -v=1: (10.23279125s)
--- PASS: TestAddons/serial/Volcano (39.50s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-439000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-439000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Ingress (17.42s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-439000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-439000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-439000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [384e8381-ab29-4ae2-a3c7-50a15e6cbc2e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [384e8381-ab29-4ae2-a3c7-50a15e6cbc2e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009231834s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-439000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable ingress --alsologtostderr -v=1: (7.249078292s)
--- PASS: TestAddons/parallel/Ingress (17.42s)

TestAddons/parallel/InspektorGadget (10.33s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-722vt" [429371b3-e0e7-416d-8ec4-57bb4ba49aa0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012691083s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-439000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-439000: (5.32069725s)
--- PASS: TestAddons/parallel/InspektorGadget (10.33s)

TestAddons/parallel/MetricsServer (5.3s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.299584ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fjw2z" [7946e3d2-10e1-49f2-a6ba-3c9e7340a22e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01173325s
addons_test.go:417: (dbg) Run:  kubectl --context addons-439000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.30s)

TestAddons/parallel/CSI (43.01s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.541083ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f484d6c4-6ce2-48c1-a474-fbb4bbef9cb6] Pending
helpers_test.go:344: "task-pv-pod" [f484d6c4-6ce2-48c1-a474-fbb4bbef9cb6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f484d6c4-6ce2-48c1-a474-fbb4bbef9cb6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.008578875s
addons_test.go:590: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-439000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-439000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-439000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-439000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-439000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bc0dbd30-c53e-48fe-bfc3-39cc931da65f] Pending
helpers_test.go:344: "task-pv-pod-restore" [bc0dbd30-c53e-48fe-bfc3-39cc931da65f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bc0dbd30-c53e-48fe-bfc3-39cc931da65f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003765417s
addons_test.go:632: (dbg) Run:  kubectl --context addons-439000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-439000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-439000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.134030667s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.01s)

TestAddons/parallel/Headlamp (18.63s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-439000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-lq62d" [fb1efd5c-cd55-4762-b619-18f987601921] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-lq62d" [fb1efd5c-cd55-4762-b619-18f987601921] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004769958s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable headlamp --alsologtostderr -v=1: (5.242581917s)
--- PASS: TestAddons/parallel/Headlamp (18.63s)

TestAddons/parallel/CloudSpanner (5.22s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-zllxf" [6a4b088e-1a62-425b-868a-33bc6402dc96] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0096135s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-439000
--- PASS: TestAddons/parallel/CloudSpanner (5.22s)

TestAddons/parallel/LocalPath (40.96s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-439000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-439000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c852f620-5b4f-4c8d-96fc-d6d03a256e29] Pending
helpers_test.go:344: "test-local-path" [c852f620-5b4f-4c8d-96fc-d6d03a256e29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c852f620-5b4f-4c8d-96fc-d6d03a256e29] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c852f620-5b4f-4c8d-96fc-d6d03a256e29] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.010979542s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-439000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 ssh "cat /opt/local-path-provisioner/pvc-039ddcae-bbc0-4a3f-b471-cdbc9265f9d3_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-439000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-439000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.438895292s)
--- PASS: TestAddons/parallel/LocalPath (40.96s)

TestAddons/parallel/NvidiaDevicePlugin (6.16s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nlzkn" [1150cb9e-9cc1-4002-99b8-1f2bafe93a02] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004783333s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-439000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.16s)

TestAddons/parallel/Yakd (10.2s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-j6tdc" [9581dadb-dc08-4125-af37-49c67c7872d0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003914667s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-439000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-439000 addons disable yakd --alsologtostderr -v=1: (5.197021792s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.4s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-439000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-439000: (12.208239333s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-439000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-439000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-439000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (11.01s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.01s)

TestErrorSpam/setup (36.44s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-651000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-651000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 --driver=qemu2 : (36.43553175s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (36.44s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.72s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 pause
--- PASS: TestErrorSpam/pause (0.72s)

TestErrorSpam/unpause (0.63s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (64.33s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 stop: (12.204995375s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 stop: (26.056614208s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-651000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-651000 stop: (26.063285375s)
--- PASS: TestErrorSpam/stop (64.33s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19576-2143/.minikube/files/etc/test/nested/copy/2672/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.94s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-152000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-152000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.940342583s)
--- PASS: TestFunctional/serial/StartWithProxy (45.94s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.38s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-152000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-152000 --alsologtostderr -v=8: (38.380588125s)
functional_test.go:663: soft start took 38.381066166s for "functional-152000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.38s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-152000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-152000 cache add registry.k8s.io/pause:3.1: (1.124394791s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3506246978/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cache add minikube-local-cache-test:functional-152000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cache delete minikube-local-cache-test:functional-152000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-152000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.018833ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
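
For reference, the reload flow exercised above can be repeated by hand. A minimal sketch, assuming an installed minikube binary and the profile from this run (the test build out/minikube-darwin-arm64 behaves the same):

    # drop the cached image inside the node, confirm it is gone, then re-push the local cache
    minikube -p functional-152000 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-152000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image absent
    minikube -p functional-152000 cache reload
    minikube -p functional-152000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 0: image restored
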
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 kubectl -- --context functional-152000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.81s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-152000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-152000 get pods: (1.007729125s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-152000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-152000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.031521s)
functional_test.go:761: restart took 37.031634208s for "functional-152000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.03s)
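
The restart above can be reproduced against an existing profile; a sketch, assuming the minikube binary is on PATH:

    # pass a component flag through to the apiserver and block until every component reports healthy
    minikube start -p functional-152000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all
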
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-152000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
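
The health check above is plain kubectl; a sketch of the equivalent query (the test then reads .status.phase and the Ready condition out of the JSON for each control-plane pod):

    kubectl --context functional-152000 get po -l tier=control-plane -n kube-system -o=json
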
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd4132213815/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.63s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-152000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-152000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-152000: exit status 115 (150.985375ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30546 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-152000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.61s)
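
The exit path above can be reproduced with any Service whose selector matches no running pod; a sketch using the same manifest from the minikube test tree:

    kubectl --context functional-152000 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-152000   # exits 115: SVC_UNREACHABLE, no running pod backs the service
    kubectl --context functional-152000 delete -f testdata/invalidsvc.yaml
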
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 config get cpus: exit status 14 (29.549292ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 config get cpus: exit status 14 (31.338666ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
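
The config round-trip above, as a sketch; exit status 14 is minikube's "not found in config" code:

    minikube -p functional-152000 config unset cpus
    minikube -p functional-152000 config get cpus    # exits 14: key not set
    minikube -p functional-152000 config set cpus 2
    minikube -p functional-152000 config get cpus    # prints 2, exits 0
    minikube -p functional-152000 config unset cpus
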
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-152000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-152000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4413: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.62s)
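
A sketch of the dashboard invocation driven above; --url prints the proxy address instead of opening a browser, and the proxy runs until interrupted:

    minikube dashboard --url --port 36195 -p functional-152000
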
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-152000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-152000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (147.186833ms)
-- stdout --
	* [functional-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0906 11:47:52.899195    4378 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:47:52.899341    4378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:47:52.899346    4378 out.go:358] Setting ErrFile to fd 2...
	I0906 11:47:52.899348    4378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:47:52.899476    4378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 11:47:52.903157    4378 out.go:352] Setting JSON to false
	I0906 11:47:52.921705    4378 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2842,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 11:47:52.921796    4378 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:47:52.926583    4378 out.go:177] * [functional-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0906 11:47:52.936856    4378 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:47:52.936903    4378 notify.go:220] Checking for updates...
	I0906 11:47:52.944779    4378 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:47:52.954750    4378 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 11:47:52.964734    4378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:47:52.975736    4378 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 11:47:52.978779    4378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:47:52.982079    4378 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:47:52.982316    4378 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:47:52.986766    4378 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 11:47:52.993730    4378 start.go:297] selected driver: qemu2
	I0906 11:47:52.993735    4378 start.go:901] validating driver "qemu2" against &{Name:functional-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:47:52.993776    4378 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:47:53.000791    4378 out.go:201] 
	W0906 11:47:53.004759    4378 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 11:47:53.008739    4378 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-152000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
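
Both dry runs above validate flags against the existing profile without touching the VM; a sketch (exit status 23 maps to RSRC_INSUFFICIENT_REQ_MEMORY):

    minikube start -p functional-152000 --dry-run --memory 250MB --driver=qemu2   # exits 23: 250MiB is below the 1800MB minimum
    minikube start -p functional-152000 --dry-run --driver=qemu2                  # valid config, exits 0
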
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-152000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-152000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.066291ms)
-- stdout --
	* [functional-152000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0906 11:47:53.148364    4389 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:47:53.148475    4389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:47:53.148478    4389 out.go:358] Setting ErrFile to fd 2...
	I0906 11:47:53.148481    4389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:47:53.148614    4389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
	I0906 11:47:53.149972    4389 out.go:352] Setting JSON to false
	I0906 11:47:53.167136    4389 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2843,"bootTime":1725645630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0906 11:47:53.167228    4389 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:47:53.171851    4389 out.go:177] * [functional-152000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0906 11:47:53.178760    4389 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:47:53.178827    4389 notify.go:220] Checking for updates...
	I0906 11:47:53.185801    4389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	I0906 11:47:53.188785    4389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 11:47:53.191796    4389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:47:53.194790    4389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	I0906 11:47:53.197839    4389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:47:53.201168    4389 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:47:53.201453    4389 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:47:53.205781    4389 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0906 11:47:53.212759    4389 start.go:297] selected driver: qemu2
	I0906 11:47:53.212765    4389 start.go:901] validating driver "qemu2" against &{Name:functional-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:47:53.212809    4389 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:47:53.218803    4389 out.go:201] 
	W0906 11:47:53.222774    4389 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 11:47:53.226761    4389 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
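
The three status forms above, as a sketch; -f takes a Go template over the status struct, and the left-hand labels are free-form text (the test's own format string even spells one "kublet"):

    minikube -p functional-152000 status
    minikube -p functional-152000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-152000 status -o json
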
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 addons list
E0906 11:47:22.442656    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [337229e4-0d57-4b50-bcac-9715daaefc64] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0063755s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-152000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-152000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-152000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-152000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6a09db55-0de3-40bb-af41-3ed24c80b94e] Pending
helpers_test.go:344: "sp-pod" [6a09db55-0de3-40bb-af41-3ed24c80b94e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0906 11:47:27.418146    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [6a09db55-0de3-40bb-af41-3ed24c80b94e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.0089775s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-152000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-152000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-152000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b229e3c-ea7c-4a0f-89ea-b3615b4d5d93] Pending
helpers_test.go:344: "sp-pod" [4b229e3c-ea7c-4a0f-89ea-b3615b4d5d93] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b229e3c-ea7c-4a0f-89ea-b3615b4d5d93] Running
E0906 11:47:42.785624    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007331416s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-152000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.04s)
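
The persistence check above is plain kubectl against the default StorageClass; a sketch using the manifests from the minikube test tree:

    kubectl --context functional-152000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-152000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-152000 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod; the PVC-backed mount keeps the file
    kubectl --context functional-152000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-152000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-152000 exec sp-pod -- ls /tmp/mount   # lists foo
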
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh -n functional-152000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cp functional-152000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2672843515/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh -n functional-152000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh -n functional-152000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)
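
minikube cp copies in either direction, addressing the node side as <profile>:<path>; a sketch of the three transfers above:

    minikube -p functional-152000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
    minikube -p functional-152000 cp functional-152000:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
    minikube -p functional-152000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parent dirs are created
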
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2672/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo cat /etc/test/nested/copy/2672/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2672.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo cat /etc/ssl/certs/2672.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2672.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo cat /usr/share/ca-certificates/2672.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/26722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo cat /etc/ssl/certs/26722.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/26722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo cat /usr/share/ca-certificates/26722.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
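
Certificates placed under the minikube home directory are synced into the guest on start; a sketch of the verification above. The 2672.pem name comes from the test runner's PID, and the .0 entries appear to be OpenSSL subject-hash names for the same certificates:

    minikube -p functional-152000 ssh "sudo cat /etc/ssl/certs/2672.pem"
    minikube -p functional-152000 ssh "sudo cat /usr/share/ca-certificates/2672.pem"
    minikube -p functional-152000 ssh "sudo cat /etc/ssl/certs/51391683.0"
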
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-152000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 ssh "sudo systemctl is-active crio": exit status 1 (64.402958ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
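
The exit code above comes from systemd, not ssh: systemctl is-active exits 3 for an inactive unit, which the ssh session reports before minikube folds it into exit status 1. A sketch:

    minikube -p functional-152000 ssh "sudo systemctl is-active crio"   # prints "inactive", non-zero exit
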
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-152000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-152000
docker.io/kicbase/echo-server:functional-152000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-152000 image ls --format short --alsologtostderr:
I0906 11:47:54.803155    4432 out.go:345] Setting OutFile to fd 1 ...
I0906 11:47:54.803312    4432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:54.803316    4432 out.go:358] Setting ErrFile to fd 2...
I0906 11:47:54.803318    4432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:54.803421    4432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
I0906 11:47:54.803861    4432 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:54.803921    4432 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:54.804696    4432 ssh_runner.go:195] Run: systemctl --version
I0906 11:47:54.804703    4432 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
I0906 11:47:54.834116    4432 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
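
image ls renders the same image list in four formats, exercised one per subtest below; a sketch (short is one repo:tag per line, while table and json/yaml add image IDs and sizes):

    minikube -p functional-152000 image ls --format short
    minikube -p functional-152000 image ls --format table
    minikube -p functional-152000 image ls --format json
    minikube -p functional-152000 image ls --format yaml
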
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-152000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| docker.io/kicbase/echo-server               | functional-152000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-152000 | 3941d2542209e | 30B    |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-152000 image ls --format table --alsologtostderr:
I0906 11:47:55.035771    4438 out.go:345] Setting OutFile to fd 1 ...
I0906 11:47:55.035900    4438 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:55.035903    4438 out.go:358] Setting ErrFile to fd 2...
I0906 11:47:55.035906    4438 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:55.036030    4438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
I0906 11:47:55.036460    4438 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:55.036523    4438 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:55.037364    4438 ssh_runner.go:195] Run: systemctl --version
I0906 11:47:55.037375    4438 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
I0906 11:47:55.065548    4438 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-152000 image ls --format json --alsologtostderr:
[{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3941d2542209e970fb1b3702c9087a1d2e12d1bd0ff84e7db7682a8ea0939cbb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-152000"],"size":"30"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-152000"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-152000 image ls --format json --alsologtostderr:
I0906 11:47:54.884390    4434 out.go:345] Setting OutFile to fd 1 ...
I0906 11:47:54.884560    4434 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:54.884567    4434 out.go:358] Setting ErrFile to fd 2...
I0906 11:47:54.884569    4434 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:54.884682    4434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
I0906 11:47:54.885108    4434 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:54.885172    4434 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:54.886093    4434 ssh_runner.go:195] Run: systemctl --version
I0906 11:47:54.886102    4434 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
I0906 11:47:54.914229    4434 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-152000 image ls --format yaml --alsologtostderr:
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-152000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 3941d2542209e970fb1b3702c9087a1d2e12d1bd0ff84e7db7682a8ea0939cbb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-152000
size: "30"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-152000 image ls --format yaml --alsologtostderr:
I0906 11:47:54.962375    4436 out.go:345] Setting OutFile to fd 1 ...
I0906 11:47:54.962571    4436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:54.962575    4436 out.go:358] Setting ErrFile to fd 2...
I0906 11:47:54.962577    4436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:54.962703    4436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
I0906 11:47:54.963138    4436 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:54.963203    4436 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:54.964016    4436 ssh_runner.go:195] Run: systemctl --version
I0906 11:47:54.964027    4436 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
I0906 11:47:54.994471    4436 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 ssh pgrep buildkitd: exit status 1 (61.195583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
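
Note: the non-zero pgrep exit above is the expected precondition probe, not a failure. pgrep exits 1 when nothing matches, meaning buildkitd is not running in the guest, so minikube falls through to the Docker builder (the "#0 building with "default" instance using docker driver" line below). A minimal sketch of the same probe, assuming the functional-152000 profile is still up:

	out/minikube-darwin-arm64 -p functional-152000 ssh pgrep buildkitd; echo "exit=$?"
	# exit=1 here means no buildkitd process in the guest; "image build" then runs "docker build" over SSH instead
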
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image build -t localhost/my-image:functional-152000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-152000 image build -t localhost/my-image:functional-152000 testdata/build --alsologtostderr: (1.674237792s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-152000 image build -t localhost/my-image:functional-152000 testdata/build --alsologtostderr:
I0906 11:47:55.168723    4442 out.go:345] Setting OutFile to fd 1 ...
I0906 11:47:55.168961    4442 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:55.168964    4442 out.go:358] Setting ErrFile to fd 2...
I0906 11:47:55.168967    4442 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:47:55.169091    4442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-2143/.minikube/bin
I0906 11:47:55.169521    4442 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:55.170287    4442 config.go:182] Loaded profile config "functional-152000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:47:55.171146    4442 ssh_runner.go:195] Run: systemctl --version
I0906 11:47:55.171156    4442 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-2143/.minikube/machines/functional-152000/id_rsa Username:docker}
I0906 11:47:55.198825    4442 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3967689291.tar
I0906 11:47:55.198878    4442 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 11:47:55.202342    4442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3967689291.tar
I0906 11:47:55.203912    4442 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3967689291.tar: stat -c "%s %y" /var/lib/minikube/build/build.3967689291.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3967689291.tar': No such file or directory
I0906 11:47:55.203929    4442 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3967689291.tar --> /var/lib/minikube/build/build.3967689291.tar (3072 bytes)
I0906 11:47:55.212532    4442 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3967689291
I0906 11:47:55.216226    4442 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3967689291 -xf /var/lib/minikube/build/build.3967689291.tar
I0906 11:47:55.219921    4442 docker.go:360] Building image: /var/lib/minikube/build/build.3967689291
I0906 11:47:55.219955    4442 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-152000 /var/lib/minikube/build/build.3967689291
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:6c4fdf3807b6086a14634a4a772bfcaab26e41b3f5952013859bdfd39f06ad46 done
#8 naming to localhost/my-image:functional-152000 done
#8 DONE 0.0s
I0906 11:47:56.799455    4442 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-152000 /var/lib/minikube/build/build.3967689291: (1.579506042s)
I0906 11:47:56.799535    4442 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3967689291
I0906 11:47:56.803363    4442 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3967689291.tar
I0906 11:47:56.806616    4442 build_images.go:217] Built localhost/my-image:functional-152000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3967689291.tar
I0906 11:47:56.806634    4442 build_images.go:133] succeeded building to: functional-152000
I0906 11:47:56.806636    4442 build_images.go:134] failed building to: 
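
Note: build steps #1-#7 above imply a three-step Dockerfile under testdata/build (97B transferred in step #1). A hedged reconstruction, inferred only from the log; the actual file may differ in tag pinning and comments:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /

Step #5 resolves the FROM line to the digest sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b shown above.
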
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls
2024/09/06 11:48:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.81s)

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.695930208s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-152000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-152000 docker-env) && out/minikube-darwin-arm64 status -p functional-152000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-152000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-152000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-152000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-w6kkx" [04de1b85-2273-4a34-a51b-b995aebd4714] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-w6kkx" [04de1b85-2273-4a34-a51b-b995aebd4714] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.009526167s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
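
Note: the create/expose pair above is the standard NodePort exposure flow. A minimal sketch for reading back the allocated port, assuming the functional-152000 context still exists:

	kubectl --context functional-152000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
	# The ServiceCmd/HTTPS and ServiceCmd/URL subtests below resolve the same endpoint
	# (node IP 192.168.105.4, port 32744) through "minikube service" instead.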

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image load --daemon kicbase/echo-server:functional-152000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image load --daemon kicbase/echo-server:functional-152000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-152000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image load --daemon kicbase/echo-server:functional-152000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image save kicbase/echo-server:functional-152000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image rm kicbase/echo-server:functional-152000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-152000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 image save --daemon kicbase/echo-server:functional-152000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-152000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
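
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a full image round trip. A condensed sketch of that flow, with /tmp standing in for the Jenkins workspace path used above:

	out/minikube-darwin-arm64 -p functional-152000 image save kicbase/echo-server:functional-152000 /tmp/echo-server-save.tar
	out/minikube-darwin-arm64 -p functional-152000 image rm kicbase/echo-server:functional-152000
	out/minikube-darwin-arm64 -p functional-152000 image load /tmp/echo-server-save.tar
	out/minikube-darwin-arm64 -p functional-152000 image save --daemon kicbase/echo-server:functional-152000
	# save writes the cluster image to a tarball; rm drops it from the cluster runtime;
	# load restores it from the tarball; save --daemon exports it to the host Docker daemon.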

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-152000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-152000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-152000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-152000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4247: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-152000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-152000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [72c98e07-0a75-40d8-b3ef-d7179ad0ee01] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [72c98e07-0a75-40d8-b3ef-d7179ad0ee01] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.009537333s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)
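
Note: nginx-svc is a LoadBalancer service (from testdata/testsvc.yaml), and it is the tunnel started in StartTunnel above that lets it acquire an ingress IP. The IngressIP subtest below reads that address with:

	kubectl --context functional-152000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	# With "minikube tunnel" active this yields a routable address (10.108.61.91 in
	# AccessDirect below); without the tunnel the field would stay empty.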

TestFunctional/parallel/ServiceCmd/List (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 service list -o json
functional_test.go:1494: Took "88.445666ms" to run "out/minikube-darwin-arm64 -p functional-152000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32744
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32744
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-152000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.61.91 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
E0906 11:47:22.276069    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
E0906 11:47:22.283720    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
E0906 11:47:22.296139    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-152000 tunnel --alsologtostderr] ...
E0906 11:47:22.317885    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:22.361152    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "88.891958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.754291ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "87.054708ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "32.526416ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.48s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port371262820/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725648467342873000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port371262820/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725648467342873000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port371262820/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725648467342873000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port371262820/001/test-1725648467342873000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.209125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 18:47 test-1725648467342873000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh cat /mount-9p/test-1725648467342873000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-152000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [45982d61-fa7d-4690-9abf-5f364144eb3d] Pending
helpers_test.go:344: "busybox-mount" [45982d61-fa7d-4690-9abf-5f364144eb3d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [45982d61-fa7d-4690-9abf-5f364144eb3d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [45982d61-fa7d-4690-9abf-5f364144eb3d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.001983958s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-152000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port371262820/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.48s)

TestFunctional/parallel/MountCmd/specific-port (0.88s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1014594767/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.360417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1014594767/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 ssh "sudo umount -f /mount-9p": exit status 1 (77.180833ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-152000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1014594767/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.88s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.72s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup52425764/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup52425764/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup52425764/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T" /mount1: exit status 1 (85.526625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-152000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-152000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup52425764/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup52425764/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-152000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup52425764/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.72s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-152000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-152000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-152000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (177.66s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-001000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0906 11:48:03.267680    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:44.230653    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:50:06.152658    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-001000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m57.466346084s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (177.66s)
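
Note: the --ha flag above provisions three control-plane nodes under one profile (ha-001000, ha-001000-m02, ha-001000-m03; the worker ha-001000-m04 arrives in AddWorkerNode below). A quick way to inspect the roles afterwards, assuming the profile is still running:

	kubectl --context ha-001000 get nodes -o wide
	out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr
	# "get nodes" shows the control-plane role on the first three nodes; "status"
	# reports host/kubelet/apiserver state per node, as run above.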

TestMultiControlPlane/serial/DeployApp (5.08s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-001000 -- rollout status deployment/busybox: (3.380611125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-9nmfx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-brqs9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-mp6t6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-9nmfx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-brqs9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-mp6t6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-9nmfx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-brqs9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-mp6t6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.08s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-9nmfx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-9nmfx -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-brqs9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-brqs9 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-mp6t6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-001000 -- exec busybox-7dff88458-mp6t6 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)
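
Note: the shell pipeline in each exec above extracts the host's address from BusyBox nslookup output: line 5 of that output is the answer record ("Address 1: <ip> <name>"), so awk 'NR==5' keeps it and cut -d' ' -f3 takes the IP field. The NR==5 offset assumes BusyBox's fixed output layout; other nslookup implementations would need a different offset. The extracted address is then pinged once from the pod:

	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
	# yields 192.168.105.1 here, the QEMU host gateway
	ping -c 1 192.168.105.1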

TestMultiControlPlane/serial/AddWorkerNode (54.69s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-001000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-001000 -v=7 --alsologtostderr: (54.462361417s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.69s)

TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-001000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

TestMultiControlPlane/serial/CopyFile (4.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp testdata/cp-test.txt ha-001000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3369500102/001/cp-test_ha-001000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000:/home/docker/cp-test.txt ha-001000-m02:/home/docker/cp-test_ha-001000_ha-001000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test_ha-001000_ha-001000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000:/home/docker/cp-test.txt ha-001000-m03:/home/docker/cp-test_ha-001000_ha-001000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test_ha-001000_ha-001000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000:/home/docker/cp-test.txt ha-001000-m04:/home/docker/cp-test_ha-001000_ha-001000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test_ha-001000_ha-001000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp testdata/cp-test.txt ha-001000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3369500102/001/cp-test_ha-001000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m02:/home/docker/cp-test.txt ha-001000:/home/docker/cp-test_ha-001000-m02_ha-001000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test_ha-001000-m02_ha-001000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m02:/home/docker/cp-test.txt ha-001000-m03:/home/docker/cp-test_ha-001000-m02_ha-001000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test_ha-001000-m02_ha-001000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m02:/home/docker/cp-test.txt ha-001000-m04:/home/docker/cp-test_ha-001000-m02_ha-001000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test_ha-001000-m02_ha-001000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp testdata/cp-test.txt ha-001000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3369500102/001/cp-test_ha-001000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m03:/home/docker/cp-test.txt ha-001000:/home/docker/cp-test_ha-001000-m03_ha-001000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test_ha-001000-m03_ha-001000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m03:/home/docker/cp-test.txt ha-001000-m02:/home/docker/cp-test_ha-001000-m03_ha-001000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test_ha-001000-m03_ha-001000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m03:/home/docker/cp-test.txt ha-001000-m04:/home/docker/cp-test_ha-001000-m03_ha-001000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test_ha-001000-m03_ha-001000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp testdata/cp-test.txt ha-001000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3369500102/001/cp-test_ha-001000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m04:/home/docker/cp-test.txt ha-001000:/home/docker/cp-test_ha-001000-m04_ha-001000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000 "sudo cat /home/docker/cp-test_ha-001000-m04_ha-001000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m04:/home/docker/cp-test.txt ha-001000-m02:/home/docker/cp-test_ha-001000-m04_ha-001000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m02 "sudo cat /home/docker/cp-test_ha-001000-m04_ha-001000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 cp ha-001000-m04:/home/docker/cp-test.txt ha-001000-m03:/home/docker/cp-test_ha-001000-m04_ha-001000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-001000 ssh -n ha-001000-m03 "sudo cat /home/docker/cp-test_ha-001000-m04_ha-001000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.30s)
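
Note: the block above is one copy-and-verify pass for every (source, destination) pairing of the host and the four nodes. The underlying command shape, with <node> one of ha-001000, ha-001000-m02, ha-001000-m03, ha-001000-m04:

	out/minikube-darwin-arm64 -p ha-001000 cp <src-path-or-node:path> <node>:<dest-path>
	out/minikube-darwin-arm64 -p ha-001000 ssh -n <node> "sudo cat <dest-path>"
	# "cp" copies host-to-node or node-to-node within the profile; the ssh cat
	# re-reads the file on the destination node to verify the contents.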

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0906 12:07:09.120912    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:07:22.253296    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:08:32.210178    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/functional-152000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.066470875s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.07s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.03s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-013000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-013000 --output=json --user=testUser: (3.033417291s)
--- PASS: TestJSONOutput/stop/Command (3.03s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-823000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-823000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.8935ms)

-- stdout --
	{"specversion":"1.0","id":"4d367413-c161-4a68-a3af-b081e903dad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-823000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf16d037-f484-4384-893b-f854240d8f50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"53aa7420-aba4-4339-89a0-c35163f92963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig"}}
	{"specversion":"1.0","id":"c7d8b596-719a-41c3-821f-5ad5a697b593","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e9798b8f-0945-418f-abc5-1c1245d09fe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4a211b93-266b-461c-a2ca-67cebeafb4e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube"}}
	{"specversion":"1.0","id":"29d35dfc-9c15-4099-a4e9-abbb0f7ef28c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aec4b531-4baf-4103-85d7-dcf1a68b5569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-823000
--- PASS: TestErrorJSONOutput (0.21s)
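Each stdout line above is a CloudEvents-style JSON object; the final io.k8s.sigs.minikube.error event carries the exit code and the DRV_UNSUPPORTED_OS failure that the test expects. A minimal sketch of a consumer for this stream, decoding only the fields visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the captured output above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}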

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (4.7s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.70s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-236000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-889000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.925167ms)

-- stdout --
	* [NoKubernetes-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-2143/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-2143/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-889000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-889000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.165125ms)

-- stdout --
	* The control-plane node NoKubernetes-889000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-889000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
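The check itself is just an exit-code probe: run systemctl inside the guest via `minikube ssh` and require a non-zero exit. A sketch of that probe in Go, reusing the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet` exits 0 only when the unit is active, so
	// the test passes precisely when this command fails. In the log above the
	// whole host was stopped, so minikube itself bailed out with exit status 83.
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-889000",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("kubelet not active, exit code:", exitErr.ExitCode())
		return
	}
	if err == nil {
		fmt.Println("kubelet is active; the test would fail here")
	}
}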

TestNoKubernetes/serial/ProfileList (15.84s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.747336s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.84s)

TestNoKubernetes/serial/Stop (3.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-889000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-889000: (3.060389291s)
--- PASS: TestNoKubernetes/serial/Stop (3.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-889000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-889000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.566ms)

-- stdout --
	* The control-plane node NoKubernetes-889000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-889000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-504000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-504000 --alsologtostderr -v=3: (3.618513125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-504000 -n old-k8s-version-504000: exit status 7 (58.3405ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-504000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
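EnableAddonAfterStop encodes a small idiom worth noting: `minikube status` accepts a Go template (`--format={{.Host}}`) and signals a stopped host through a non-zero exit (status 7 above), which the test deliberately tolerates before enabling the addon. A hedged sketch of the same flow, with the binary path and profile name taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const bin = "out/minikube-darwin-arm64"
	const profile = "old-k8s-version-504000"

	// Query only the host field via a Go template; a stopped host makes the
	// command exit non-zero, mirroring "status error: exit status 7 (may be ok)".
	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	fmt.Printf("host state %q (status error: %v, may be ok)\n", strings.TrimSpace(string(out)), err)

	// Enabling an addon still succeeds against the stopped profile here,
	// presumably recording the setting to be applied on the next start.
	enable := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if err := enable.Run(); err != nil {
		fmt.Println("addon enable failed:", err)
	}
}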

TestStartStop/group/no-preload/serial/Stop (3.47s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-052000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-052000 --alsologtostderr -v=3: (3.466181625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.47s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (54.914792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-052000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.3s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-760000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-760000 --alsologtostderr -v=3: (3.296177833s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.30s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-760000 -n embed-certs-760000: exit status 7 (54.338833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-760000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0906 12:37:05.356895    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.36s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-760000 --alsologtostderr -v=3
E0906 12:37:22.265970    2672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-2143/.minikube/profiles/addons-439000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-760000 --alsologtostderr -v=3: (3.364054708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-760000 -n default-k8s-diff-port-760000: exit status 7 (57.696042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-760000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-793000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.96s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-793000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-793000 --alsologtostderr -v=3: (1.96179325s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.96s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-793000 -n newest-cni-793000: exit status 7 (56.403833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-793000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/270)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.33s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-269000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-269000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-269000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /etc/hosts:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /etc/resolv.conf:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-269000

>>> host: crictl pods:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: crictl containers:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> k8s: describe netcat deployment:
error: context "cilium-269000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-269000" does not exist

>>> k8s: netcat logs:
error: context "cilium-269000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-269000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-269000" does not exist

>>> k8s: coredns logs:
error: context "cilium-269000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-269000" does not exist

>>> k8s: api server logs:
error: context "cilium-269000" does not exist

>>> host: /etc/cni:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: ip a s:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: ip r s:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: iptables-save:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: iptables table nat:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-269000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-269000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-269000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-269000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-269000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-269000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-269000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-269000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-269000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-269000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-269000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: kubelet daemon config:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> k8s: kubelet logs:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-269000

>>> host: docker daemon status:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: docker daemon config:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: docker system info:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: cri-docker daemon status:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: cri-docker daemon config:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: cri-dockerd version:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: containerd daemon status:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: containerd daemon config:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: containerd config dump:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: crio daemon status:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: crio daemon config:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: /etc/crio:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

>>> host: crio config:
* Profile "cilium-269000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269000"

----------------------- debugLogs end: cilium-269000 [took: 2.217756166s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-269000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-269000
--- SKIP: TestNetworkPlugins/group/cilium (2.33s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-755000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-755000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)