Test Report: QEMU_macOS 19667

39f19baf3a7e1c810682dda0eb22abd909c6f2ab:2024-09-18:36273

Failed tests (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.24
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.32
33 TestAddons/parallel/Registry 71.27
46 TestCertOptions 12.2
47 TestCertExpiration 199.29
48 TestDockerFlags 10.1
49 TestForceSystemdFlag 10.3
50 TestForceSystemdEnv 9.96
95 TestFunctional/parallel/ServiceCmdConnect 36.14
167 TestMultiControlPlane/serial/StopSecondaryNode 64.12
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.93
169 TestMultiControlPlane/serial/RestartSecondaryNode 82.97
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 300.23
175 TestMultiControlPlane/serial/RestartCluster 5.24
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.16
184 TestJSONOutput/start/Command 9.85
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.27
216 TestMountStart/serial/StartWithMountFirst 9.99
219 TestMultiNode/serial/FreshStart2Nodes 9.92
220 TestMultiNode/serial/DeployApp2Nodes 71.69
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 46.91
228 TestMultiNode/serial/RestartKeepsNodes 8.2
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 2.09
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.31
236 TestPreload 10.01
238 TestScheduledStopUnix 9.93
239 TestSkaffold 12.36
242 TestRunningBinaryUpgrade 610.92
244 TestKubernetesUpgrade 21.98
248 TestNoKubernetes/serial/StartWithK8s 12.63
249 TestNoKubernetes/serial/StartWithStopK8s 7.59
250 TestNoKubernetes/serial/Start 7.62
254 TestNoKubernetes/serial/StartNoArgs 5.35
256 TestStoppedBinaryUpgrade/Upgrade 606.99
267 TestPause/serial/Start 10.17
279 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.89
280 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.73
282 TestStartStop/group/old-k8s-version/serial/FirstStart 10.03
283 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
284 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
287 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
291 TestStartStop/group/old-k8s-version/serial/Pause 0.1
293 TestStartStop/group/no-preload/serial/FirstStart 10.06
294 TestStartStop/group/no-preload/serial/DeployApp 0.09
295 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
298 TestStartStop/group/no-preload/serial/SecondStart 5.26
299 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
300 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
301 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
302 TestStartStop/group/no-preload/serial/Pause 0.1
304 TestStartStop/group/embed-certs/serial/FirstStart 9.99
305 TestStartStop/group/embed-certs/serial/DeployApp 0.09
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
309 TestStartStop/group/embed-certs/serial/SecondStart 5.27
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
313 TestStartStop/group/embed-certs/serial/Pause 0.1
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.93
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
323 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
324 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
326 TestStartStop/group/newest-cni/serial/FirstStart 10.04
331 TestStartStop/group/newest-cni/serial/SecondStart 5.26
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/newest-cni/serial/Pause 0.1
336 TestNetworkPlugins/group/auto/Start 9.97
337 TestNetworkPlugins/group/calico/Start 9.84
338 TestNetworkPlugins/group/custom-flannel/Start 9.93
339 TestNetworkPlugins/group/false/Start 10.16
340 TestNetworkPlugins/group/kindnet/Start 9.85
341 TestNetworkPlugins/group/flannel/Start 10.07
342 TestNetworkPlugins/group/enable-default-cni/Start 10.08
343 TestNetworkPlugins/group/bridge/Start 12.15
344 TestNetworkPlugins/group/kubenet/Start 10.05

TestDownloadOnly/v1.20.0/json-events (13.24s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-576000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-576000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.239528333s)

-- stdout --
	{"specversion":"1.0","id":"9abaca62-7248-499f-96b0-fca15c088431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-576000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"987192ec-cb6a-4a59-b382-2c32487512fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"e2e6f52c-01fb-4284-b364-5449efb36719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig"}}
	{"specversion":"1.0","id":"19d1a3cc-5987-416e-9825-8716a8b24dae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3468665a-6aef-4332-b3b9-c52f1afa9903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c361032a-47e5-4dd9-9230-8dcf2cd8106c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube"}}
	{"specversion":"1.0","id":"edbc4467-a3d9-4f87-8f17-e7fe91a43656","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"11e99ce9-e22e-433e-a68f-efc27140cfaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6847963e-5c89-4d75-9e59-7da525b2ffc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f7c3c8d3-f0fd-4c41-9329-e3bfbed65326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b32a835-4f4e-4fc6-830d-a91e8f3fe631","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-576000\" primary control-plane node in \"download-only-576000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2a3619d-b881-438f-b955-2b93631183ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcfcdc3f-26e6-4927-b167-710b517d27d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106759780 0x106759780 0x106759780 0x106759780 0x106759780 0x106759780 0x106759780] Decompressors:map[bz2:0x1400074f490 gz:0x1400074f498 tar:0x1400074f440 tar.bz2:0x1400074f450 tar.gz:0x1400074f460 tar.xz:0x1400074f470 tar.zst:0x1400074f480 tbz2:0x1400074f450 tgz:0x14
00074f460 txz:0x1400074f470 tzst:0x1400074f480 xz:0x1400074f4a0 zip:0x1400074f4b0 zst:0x1400074f4a8] Getters:map[file:0x1400057c9f0 http:0x14000734410 https:0x14000734460] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"effdb96e-ab44-4ad1-9e00-f0baa452a790","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0918 12:37:09.072665    1517 out.go:345] Setting OutFile to fd 1 ...
	I0918 12:37:09.073065    1517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:09.073069    1517 out.go:358] Setting ErrFile to fd 2...
	I0918 12:37:09.073072    1517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:09.073271    1517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	W0918 12:37:09.073377    1517 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19667-1040/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19667-1040/.minikube/config/config.json: no such file or directory
	I0918 12:37:09.074900    1517 out.go:352] Setting JSON to true
	I0918 12:37:09.093374    1517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":388,"bootTime":1726687841,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 12:37:09.093501    1517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 12:37:09.099601    1517 out.go:97] [download-only-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 12:37:09.099766    1517 notify.go:220] Checking for updates...
	W0918 12:37:09.099820    1517 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 12:37:09.101441    1517 out.go:169] MINIKUBE_LOCATION=19667
	I0918 12:37:09.104603    1517 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 12:37:09.108752    1517 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:37:09.111588    1517 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:37:09.114545    1517 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	W0918 12:37:09.120533    1517 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 12:37:09.120737    1517 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 12:37:09.124530    1517 out.go:97] Using the qemu2 driver based on user configuration
	I0918 12:37:09.124546    1517 start.go:297] selected driver: qemu2
	I0918 12:37:09.124559    1517 start.go:901] validating driver "qemu2" against <nil>
	I0918 12:37:09.124629    1517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 12:37:09.127622    1517 out.go:169] Automatically selected the socket_vmnet network
	I0918 12:37:09.132261    1517 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0918 12:37:09.132368    1517 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 12:37:09.132393    1517 cni.go:84] Creating CNI manager for ""
	I0918 12:37:09.132427    1517 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:37:09.132492    1517 start.go:340] cluster config:
	{Name:download-only-576000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 12:37:09.138078    1517 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:09.142452    1517 out.go:97] Downloading VM boot image ...
	I0918 12:37:09.142467    1517 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0918 12:37:15.036513    1517 out.go:97] Starting "download-only-576000" primary control-plane node in "download-only-576000" cluster
	I0918 12:37:15.036532    1517 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 12:37:15.095946    1517 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 12:37:15.095954    1517 cache.go:56] Caching tarball of preloaded images
	I0918 12:37:15.096115    1517 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 12:37:15.101701    1517 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0918 12:37:15.101707    1517 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:37:15.198702    1517 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 12:37:20.968638    1517 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:37:20.968788    1517 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:37:21.664196    1517 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 12:37:21.664384    1517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/download-only-576000/config.json ...
	I0918 12:37:21.664401    1517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/download-only-576000/config.json: {Name:mk44c4b52f07432554c5b53c20e72a7d2815a96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:37:21.664637    1517 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 12:37:21.664849    1517 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0918 12:37:22.236091    1517 out.go:193] 
	W0918 12:37:22.241236    1517 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106759780 0x106759780 0x106759780 0x106759780 0x106759780 0x106759780 0x106759780] Decompressors:map[bz2:0x1400074f490 gz:0x1400074f498 tar:0x1400074f440 tar.bz2:0x1400074f450 tar.gz:0x1400074f460 tar.xz:0x1400074f470 tar.zst:0x1400074f480 tbz2:0x1400074f450 tgz:0x1400074f460 txz:0x1400074f470 tzst:0x1400074f480 xz:0x1400074f4a0 zip:0x1400074f4b0 zst:0x1400074f4a8] Getters:map[file:0x1400057c9f0 http:0x14000734410 https:0x14000734460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0918 12:37:22.241262    1517 out_reason.go:110] 
	W0918 12:37:22.250062    1517 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:37:22.254124    1517 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-576000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.24s)
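Note: the root cause above is a 404 on the kubectl checksum URL (https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256), likely because no darwin/arm64 kubectl binary was ever published for v1.20.0. As a minimal sketch (not part of the test suite), the probe below reproduces the check the downloader performs; the URL is taken verbatim from the error log.

    // probe_kubectl.go — sketch: HEAD the checksum URL that the failing
    // download verifies against; a 404 matches the INET_CACHE_KUBECTL error.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request error:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status) // "404 Not Found" per the log above
    }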

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
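Note: this subtest is a cascade of the json-events failure above — it only checks that the cached kubectl binary exists, and the aborted download never created it. A minimal sketch of the same existence check, with the path taken from the failure message:

    // cachecheck.go — sketch: stat the cached kubectl path the subtest expects;
    // the file is absent because the json-events download exited first.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        path := "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
        if _, err := os.Stat(path); err != nil {
            fmt.Println("cached kubectl missing:", err) // matches the FAIL above
            return
        }
        fmt.Println("cached kubectl present at", path)
    }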

TestOffline (10.32s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.141103583s)

-- stdout --
	* [offline-docker-716000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-716000" primary control-plane node in "offline-docker-716000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-716000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:20:51.457898    3742 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:20:51.458029    3742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:20:51.458032    3742 out.go:358] Setting ErrFile to fd 2...
	I0918 13:20:51.458035    3742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:20:51.458162    3742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:20:51.459339    3742 out.go:352] Setting JSON to false
	I0918 13:20:51.479856    3742 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3010,"bootTime":1726687841,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:20:51.479935    3742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:20:51.495606    3742 out.go:177] * [offline-docker-716000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:20:51.506740    3742 notify.go:220] Checking for updates...
	I0918 13:20:51.509716    3742 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:20:51.515608    3742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:20:51.518620    3742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:20:51.521568    3742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:20:51.527612    3742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:20:51.529172    3742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:20:51.532987    3742 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:20:51.533050    3742 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:20:51.545625    3742 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:20:51.553641    3742 start.go:297] selected driver: qemu2
	I0918 13:20:51.553648    3742 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:20:51.553655    3742 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:20:51.555969    3742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:20:51.564582    3742 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:20:51.572701    3742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:20:51.572724    3742 cni.go:84] Creating CNI manager for ""
	I0918 13:20:51.572750    3742 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:20:51.572755    3742 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:20:51.572802    3742 start.go:340] cluster config:
	{Name:offline-docker-716000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:20:51.577068    3742 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:51.580593    3742 out.go:177] * Starting "offline-docker-716000" primary control-plane node in "offline-docker-716000" cluster
	I0918 13:20:51.588561    3742 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:20:51.588598    3742 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:20:51.588606    3742 cache.go:56] Caching tarball of preloaded images
	I0918 13:20:51.588693    3742 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:20:51.588699    3742 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:20:51.588767    3742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/offline-docker-716000/config.json ...
	I0918 13:20:51.588778    3742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/offline-docker-716000/config.json: {Name:mkb97d62d57e4c2b593168a343f708251edd7611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:20:51.589092    3742 start.go:360] acquireMachinesLock for offline-docker-716000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:20:51.589130    3742 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "offline-docker-716000"
	I0918 13:20:51.589140    3742 start.go:93] Provisioning new machine with config: &{Name:offline-docker-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:20:51.589168    3742 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:20:51.592719    3742 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:20:51.609705    3742 start.go:159] libmachine.API.Create for "offline-docker-716000" (driver="qemu2")
	I0918 13:20:51.609737    3742 client.go:168] LocalClient.Create starting
	I0918 13:20:51.609812    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:20:51.609841    3742 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:51.609850    3742 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:51.609892    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:20:51.609915    3742 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:51.609925    3742 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:51.610298    3742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:20:51.865566    3742 main.go:141] libmachine: Creating SSH key...
	I0918 13:20:52.066464    3742 main.go:141] libmachine: Creating Disk image...
	I0918 13:20:52.066472    3742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:20:52.066680    3742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2
	I0918 13:20:52.076540    3742 main.go:141] libmachine: STDOUT: 
	I0918 13:20:52.076559    3742 main.go:141] libmachine: STDERR: 
	I0918 13:20:52.076623    3742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2 +20000M
	I0918 13:20:52.084634    3742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:20:52.084648    3742 main.go:141] libmachine: STDERR: 
	I0918 13:20:52.084664    3742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2
	I0918 13:20:52.084668    3742 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:20:52.084680    3742 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:20:52.084712    3742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:12:76:cd:e1:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2
	I0918 13:20:52.086327    3742 main.go:141] libmachine: STDOUT: 
	I0918 13:20:52.086341    3742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:20:52.086364    3742 client.go:171] duration metric: took 476.630208ms to LocalClient.Create
	I0918 13:20:54.088545    3742 start.go:128] duration metric: took 2.499419791s to createHost
	I0918 13:20:54.088596    3742 start.go:83] releasing machines lock for "offline-docker-716000", held for 2.499522s
	W0918 13:20:54.088647    3742 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:20:54.103839    3742 out.go:177] * Deleting "offline-docker-716000" in qemu2 ...
	W0918 13:20:54.136352    3742 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:20:54.136385    3742 start.go:729] Will try again in 5 seconds ...
	I0918 13:20:59.138484    3742 start.go:360] acquireMachinesLock for offline-docker-716000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:20:59.138930    3742 start.go:364] duration metric: took 373.291µs to acquireMachinesLock for "offline-docker-716000"
	I0918 13:20:59.139078    3742 start.go:93] Provisioning new machine with config: &{Name:offline-docker-716000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-716000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:20:59.139379    3742 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:20:59.149913    3742 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:20:59.201440    3742 start.go:159] libmachine.API.Create for "offline-docker-716000" (driver="qemu2")
	I0918 13:20:59.201615    3742 client.go:168] LocalClient.Create starting
	I0918 13:20:59.201777    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:20:59.201851    3742 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:59.201866    3742 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:59.201955    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:20:59.202003    3742 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:59.202016    3742 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:59.202558    3742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:20:59.397454    3742 main.go:141] libmachine: Creating SSH key...
	I0918 13:20:59.502081    3742 main.go:141] libmachine: Creating Disk image...
	I0918 13:20:59.502087    3742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:20:59.502267    3742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2
	I0918 13:20:59.511693    3742 main.go:141] libmachine: STDOUT: 
	I0918 13:20:59.511712    3742 main.go:141] libmachine: STDERR: 
	I0918 13:20:59.511759    3742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2 +20000M
	I0918 13:20:59.519688    3742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:20:59.519704    3742 main.go:141] libmachine: STDERR: 
	I0918 13:20:59.519723    3742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2
	I0918 13:20:59.519727    3742 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:20:59.519739    3742 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:20:59.519763    3742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d9:83:25:a7:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/offline-docker-716000/disk.qcow2
	I0918 13:20:59.521308    3742 main.go:141] libmachine: STDOUT: 
	I0918 13:20:59.521322    3742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:20:59.521335    3742 client.go:171] duration metric: took 319.70525ms to LocalClient.Create
	I0918 13:21:01.523557    3742 start.go:128] duration metric: took 2.384197958s to createHost
	I0918 13:21:01.523620    3742 start.go:83] releasing machines lock for "offline-docker-716000", held for 2.384727125s
	W0918 13:21:01.523894    3742 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-716000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:21:01.532430    3742 out.go:201] 
	W0918 13:21:01.538692    3742 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:21:01.538754    3742 out.go:270] * 
	* 
	W0918 13:21:01.541499    3742 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:21:01.550406    3742 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-18 13:21:01.567775 -0700 PDT m=+2632.654002085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-716000 -n offline-docker-716000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-716000 -n offline-docker-716000: exit status 7 (69.628166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-716000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-716000
--- FAIL: TestOffline (10.32s)
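Note: every qemu2 start in this run fails the same way — the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client and gets `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. no socket_vmnet daemon is listening on the host. A small diagnostic sketch (not part of minikube) that dials the socket directly to confirm this before re-running the suite:

    // socketcheck.go — sketch: dial the socket_vmnet unix socket used by the
    // qemu2 driver; "connection refused" reproduces the failure above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the minikube logs
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err) // expected on this host
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections at", sock)
    }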

TestAddons/parallel/Registry (71.27s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.194333ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0918 12:49:35.123054    1516 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 12:49:35.123061    1516 kapi.go:107] duration metric: took 2.753417ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-zqh2d" [2cd0b0e2-c98a-477c-9973-6e010d122199] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007803208s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pgsgm" [733c8b1c-39a5-4634-92d4-ac15f0f79484] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005526834s
addons_test.go:342: (dbg) Run:  kubectl --context addons-476000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-476000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-476000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.054516291s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-476000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable registry --alsologtostderr -v=1
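Note: the in-pod probe (`wget --spider -S http://registry.kube-system.svc.cluster.local`) timed out instead of returning "HTTP/1.1 200". The sketch below is an illustrative Go equivalent of that check; it assumes it runs inside the cluster (for example via a one-off pod), since the service name only resolves through cluster DNS.

    // registryprobe.go — sketch: HEAD the registry service the way
    // `wget --spider` does, and report the status the test asserts on.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Println("probe failed:", err) // this test run saw a timeout here
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Proto, resp.Status) // test expects HTTP/1.1 200
    }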
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-476000 -n addons-476000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-576000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |                     |
	|         | -p download-only-576000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| delete  | -p download-only-576000              | download-only-576000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| start   | -o=json --download-only              | download-only-832000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |                     |
	|         | -p download-only-832000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| delete  | -p download-only-832000              | download-only-832000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| delete  | -p download-only-576000              | download-only-576000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| delete  | -p download-only-832000              | download-only-832000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| start   | --download-only -p                   | binary-mirror-256000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |                     |
	|         | binary-mirror-256000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-256000              | binary-mirror-256000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| addons  | disable dashboard -p                 | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |                     |
	|         | addons-476000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |                     |
	|         | addons-476000                        |                      |         |         |                     |                     |
	| start   | -p addons-476000 --wait=true         | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:40 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-476000 addons disable         | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:41 PDT | 18 Sep 24 12:41 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-476000 addons                 | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-476000 addons                 | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-476000 addons                 | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | addons-476000                        |                      |         |         |                     |                     |
	| ssh     | addons-476000 ssh curl -s            | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-476000 ip                     | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	| addons  | addons-476000 addons disable         | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-476000 addons disable         | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| ip      | addons-476000 ip                     | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	| addons  | addons-476000 addons disable         | addons-476000        | jenkins | v1.34.0 | 18 Sep 24 12:50 PDT | 18 Sep 24 12:50 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 12:37:31
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 12:37:31.206148    1595 out.go:345] Setting OutFile to fd 1 ...
	I0918 12:37:31.206274    1595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:31.206277    1595 out.go:358] Setting ErrFile to fd 2...
	I0918 12:37:31.206280    1595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:31.206406    1595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 12:37:31.207519    1595 out.go:352] Setting JSON to false
	I0918 12:37:31.224399    1595 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":410,"bootTime":1726687841,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 12:37:31.224465    1595 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 12:37:31.227797    1595 out.go:177] * [addons-476000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 12:37:31.234795    1595 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 12:37:31.234859    1595 notify.go:220] Checking for updates...
	I0918 12:37:31.241685    1595 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 12:37:31.244738    1595 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:37:31.247690    1595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:37:31.250729    1595 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 12:37:31.253750    1595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 12:37:31.256837    1595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 12:37:31.260733    1595 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 12:37:31.266697    1595 start.go:297] selected driver: qemu2
	I0918 12:37:31.266705    1595 start.go:901] validating driver "qemu2" against <nil>
	I0918 12:37:31.266711    1595 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 12:37:31.268976    1595 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 12:37:31.271693    1595 out.go:177] * Automatically selected the socket_vmnet network
	I0918 12:37:31.274800    1595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:37:31.274816    1595 cni.go:84] Creating CNI manager for ""
	I0918 12:37:31.274846    1595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:37:31.274850    1595 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:37:31.274887    1595 start.go:340] cluster config:
	{Name:addons-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 12:37:31.278621    1595 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:31.286728    1595 out.go:177] * Starting "addons-476000" primary control-plane node in "addons-476000" cluster
	I0918 12:37:31.290727    1595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 12:37:31.290744    1595 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 12:37:31.290751    1595 cache.go:56] Caching tarball of preloaded images
	I0918 12:37:31.290814    1595 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 12:37:31.290820    1595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 12:37:31.291026    1595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/config.json ...
	I0918 12:37:31.291038    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/config.json: {Name:mk84ba295b30b9f46d65b3f0e8f0a5fed4c9a6f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:37:31.291464    1595 start.go:360] acquireMachinesLock for addons-476000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 12:37:31.291545    1595 start.go:364] duration metric: took 74.625µs to acquireMachinesLock for "addons-476000"
	I0918 12:37:31.291556    1595 start.go:93] Provisioning new machine with config: &{Name:addons-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:37:31.291587    1595 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 12:37:31.300768    1595 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0918 12:37:32.327230    1595 start.go:159] libmachine.API.Create for "addons-476000" (driver="qemu2")
	I0918 12:37:32.327333    1595 client.go:168] LocalClient.Create starting
	I0918 12:37:32.327633    1595 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 12:37:32.460334    1595 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 12:37:32.552804    1595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 12:37:33.359593    1595 main.go:141] libmachine: Creating SSH key...
	I0918 12:37:33.426668    1595 main.go:141] libmachine: Creating Disk image...
	I0918 12:37:33.426674    1595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 12:37:33.427635    1595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/disk.qcow2
	I0918 12:37:33.512004    1595 main.go:141] libmachine: STDOUT: 
	I0918 12:37:33.512033    1595 main.go:141] libmachine: STDERR: 
	I0918 12:37:33.512127    1595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/disk.qcow2 +20000M
	I0918 12:37:33.522138    1595 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 12:37:33.522155    1595 main.go:141] libmachine: STDERR: 
	I0918 12:37:33.522171    1595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/disk.qcow2
	I0918 12:37:33.522177    1595 main.go:141] libmachine: Starting QEMU VM...
	I0918 12:37:33.522227    1595 qemu.go:418] Using hvf for hardware acceleration
	I0918 12:37:33.522307    1595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:b2:ee:98:c1:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/disk.qcow2
	I0918 12:37:33.577454    1595 main.go:141] libmachine: STDOUT: 
	I0918 12:37:33.577493    1595 main.go:141] libmachine: STDERR: 
	I0918 12:37:33.577497    1595 main.go:141] libmachine: Attempt 0
	I0918 12:37:33.577508    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:33.577589    1595 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 12:37:33.577610    1595 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66ec7bee}
	I0918 12:37:35.579778    1595 main.go:141] libmachine: Attempt 1
	I0918 12:37:35.579859    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:35.580095    1595 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 12:37:35.580146    1595 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66ec7bee}
	I0918 12:37:37.580874    1595 main.go:141] libmachine: Attempt 2
	I0918 12:37:37.580986    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:37.581275    1595 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 12:37:37.581326    1595 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66ec7bee}
	I0918 12:37:39.583474    1595 main.go:141] libmachine: Attempt 3
	I0918 12:37:39.583511    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:39.583575    1595 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 12:37:39.583589    1595 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66ec7bee}
	I0918 12:37:41.585570    1595 main.go:141] libmachine: Attempt 4
	I0918 12:37:41.585579    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:41.585621    1595 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 12:37:41.585646    1595 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66ec7bee}
	I0918 12:37:43.586200    1595 main.go:141] libmachine: Attempt 5
	I0918 12:37:43.586214    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:43.586287    1595 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 12:37:43.586297    1595 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66ec7bee}
	I0918 12:37:45.587659    1595 main.go:141] libmachine: Attempt 6
	I0918 12:37:45.587683    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:45.587730    1595 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0918 12:37:45.587741    1595 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66ec7bee}
	I0918 12:37:47.589866    1595 main.go:141] libmachine: Attempt 7
	I0918 12:37:47.590019    1595 main.go:141] libmachine: Searching for b2:b2:ee:98:c1:8a in /var/db/dhcpd_leases ...
	I0918 12:37:47.590332    1595 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0918 12:37:47.590383    1595 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b2:b2:ee:98:c1:8a ID:1,b2:b2:ee:98:c1:8a Lease:0x66ec7d8a}
	I0918 12:37:47.590399    1595 main.go:141] libmachine: Found match: b2:b2:ee:98:c1:8a
	I0918 12:37:47.590436    1595 main.go:141] libmachine: IP: 192.168.105.2
	I0918 12:37:47.590457    1595 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0918 12:37:50.604159    1595 machine.go:93] provisionDockerMachine start ...
	I0918 12:37:50.605395    1595 main.go:141] libmachine: Using SSH client type: native
	I0918 12:37:50.605610    1595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007c9190] 0x1007cb9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 12:37:50.605619    1595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 12:37:50.667884    1595 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 12:37:50.667901    1595 buildroot.go:166] provisioning hostname "addons-476000"
	I0918 12:37:50.667981    1595 main.go:141] libmachine: Using SSH client type: native
	I0918 12:37:50.668138    1595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007c9190] 0x1007cb9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 12:37:50.668146    1595 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-476000 && echo "addons-476000" | sudo tee /etc/hostname
	I0918 12:37:50.732788    1595 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-476000
	
	I0918 12:37:50.732846    1595 main.go:141] libmachine: Using SSH client type: native
	I0918 12:37:50.732991    1595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007c9190] 0x1007cb9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 12:37:50.733001    1595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 12:37:50.793000    1595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 12:37:50.793016    1595 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19667-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19667-1040/.minikube}
	I0918 12:37:50.793027    1595 buildroot.go:174] setting up certificates
	I0918 12:37:50.793032    1595 provision.go:84] configureAuth start
	I0918 12:37:50.793042    1595 provision.go:143] copyHostCerts
	I0918 12:37:50.793154    1595 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem (1082 bytes)
	I0918 12:37:50.793405    1595 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem (1123 bytes)
	I0918 12:37:50.793552    1595 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem (1679 bytes)
	I0918 12:37:50.793659    1595 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem org=jenkins.addons-476000 san=[127.0.0.1 192.168.105.2 addons-476000 localhost minikube]
	I0918 12:37:50.909226    1595 provision.go:177] copyRemoteCerts
	I0918 12:37:50.909465    1595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 12:37:50.909479    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:37:50.939981    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 12:37:50.948520    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 12:37:50.956842    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 12:37:50.965586    1595 provision.go:87] duration metric: took 172.5385ms to configureAuth
	I0918 12:37:50.965597    1595 buildroot.go:189] setting minikube options for container-runtime
	I0918 12:37:50.965712    1595 config.go:182] Loaded profile config "addons-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 12:37:50.965753    1595 main.go:141] libmachine: Using SSH client type: native
	I0918 12:37:50.965840    1595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007c9190] 0x1007cb9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 12:37:50.965846    1595 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 12:37:51.019943    1595 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 12:37:51.019952    1595 buildroot.go:70] root file system type: tmpfs
	I0918 12:37:51.020002    1595 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 12:37:51.020050    1595 main.go:141] libmachine: Using SSH client type: native
	I0918 12:37:51.020150    1595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007c9190] 0x1007cb9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 12:37:51.020183    1595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 12:37:51.076367    1595 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 12:37:51.076414    1595 main.go:141] libmachine: Using SSH client type: native
	I0918 12:37:51.076510    1595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007c9190] 0x1007cb9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 12:37:51.076519    1595 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 12:37:52.462856    1595 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
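The empty ExecStart= line in the unit written above is the standard systemd idiom for replacing, rather than appending to, an inherited start command, exactly as the unit's own comments describe. A minimal stand-alone sketch of the same technique (the drop-in path and dockerd flags here are illustrative, not taken from this run):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    # Clear the inherited command first; without this, systemd refuses the unit
    # with "more than one ExecStart= setting".
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker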
	
	I0918 12:37:52.462869    1595 machine.go:96] duration metric: took 1.85875325s to provisionDockerMachine
	I0918 12:37:52.462875    1595 client.go:171] duration metric: took 20.136158s to LocalClient.Create
	I0918 12:37:52.462885    1595 start.go:167] duration metric: took 20.136293667s to libmachine.API.Create "addons-476000"
	I0918 12:37:52.462889    1595 start.go:293] postStartSetup for "addons-476000" (driver="qemu2")
	I0918 12:37:52.462895    1595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 12:37:52.462966    1595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 12:37:52.462975    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:37:52.492981    1595 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 12:37:52.494497    1595 info.go:137] Remote host: Buildroot 2023.02.9
	I0918 12:37:52.494506    1595 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/addons for local assets ...
	I0918 12:37:52.494586    1595 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/files for local assets ...
	I0918 12:37:52.494622    1595 start.go:296] duration metric: took 31.731042ms for postStartSetup
	I0918 12:37:52.495066    1595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/config.json ...
	I0918 12:37:52.495420    1595 start.go:128] duration metric: took 21.204481875s to createHost
	I0918 12:37:52.495450    1595 main.go:141] libmachine: Using SSH client type: native
	I0918 12:37:52.495545    1595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007c9190] 0x1007cb9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0918 12:37:52.495550    1595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 12:37:52.546952    1595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726688273.034391128
	
	I0918 12:37:52.546961    1595 fix.go:216] guest clock: 1726688273.034391128
	I0918 12:37:52.546966    1595 fix.go:229] Guest: 2024-09-18 12:37:53.034391128 -0700 PDT Remote: 2024-09-18 12:37:52.495422 -0700 PDT m=+21.308508626 (delta=538.969128ms)
	I0918 12:37:52.546979    1595 fix.go:200] guest clock delta is within tolerance: 538.969128ms
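The clock check above compares `date +%s.%N` on the guest against the host wall clock and accepts the boot when the skew is inside tolerance. The same comparison can be made by hand over the run's SSH key (the bc arithmetic is illustrative; macOS date lacks %N, so host precision is one second):

    host_ts=$(date +%s)
    guest_ts=$(ssh -i /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa \
      docker@192.168.105.2 date +%s.%N)
    # A positive delta means the guest clock is ahead of the host.
    echo "delta: $(echo "$guest_ts - $host_ts" | bc)s"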
	I0918 12:37:52.546982    1595 start.go:83] releasing machines lock for "addons-476000", held for 21.256091541s
	I0918 12:37:52.547297    1595 ssh_runner.go:195] Run: cat /version.json
	I0918 12:37:52.547307    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:37:52.547523    1595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 12:37:52.547551    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:37:52.576560    1595 ssh_runner.go:195] Run: systemctl --version
	I0918 12:37:52.622420    1595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 12:37:52.624451    1595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 12:37:52.624488    1595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 12:37:52.630785    1595 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 12:37:52.630792    1595 start.go:495] detecting cgroup driver to use...
	I0918 12:37:52.630913    1595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 12:37:52.637415    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 12:37:52.641023    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 12:37:52.644686    1595 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 12:37:52.644717    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 12:37:52.648401    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 12:37:52.652092    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 12:37:52.656094    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 12:37:52.659883    1595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 12:37:52.663762    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 12:37:52.667519    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 12:37:52.671591    1595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 12:37:52.675442    1595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 12:37:52.679011    1595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 12:37:52.682303    1595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:37:52.759516    1595 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 12:37:52.770169    1595 start.go:495] detecting cgroup driver to use...
	I0918 12:37:52.770238    1595 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 12:37:52.776102    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 12:37:52.781499    1595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 12:37:52.792191    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 12:37:52.797793    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 12:37:52.803046    1595 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 12:37:52.843956    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 12:37:52.850172    1595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 12:37:52.857198    1595 ssh_runner.go:195] Run: which cri-dockerd
	I0918 12:37:52.858648    1595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 12:37:52.861973    1595 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0918 12:37:52.868287    1595 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 12:37:52.942655    1595 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 12:37:53.026209    1595 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 12:37:53.026268    1595 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 12:37:53.032167    1595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:37:53.115015    1595 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 12:37:55.308452    1595 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.19348625s)
	I0918 12:37:55.308520    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 12:37:55.314461    1595 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0918 12:37:55.321007    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 12:37:55.326440    1595 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 12:37:55.418719    1595 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 12:37:55.492556    1595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:37:55.578640    1595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 12:37:55.585312    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 12:37:55.591617    1595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:37:55.680749    1595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 12:37:55.706616    1595 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 12:37:55.706897    1595 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 12:37:55.709227    1595 start.go:563] Will wait 60s for crictl version
	I0918 12:37:55.709266    1595 ssh_runner.go:195] Run: which crictl
	I0918 12:37:55.710596    1595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 12:37:55.726384    1595 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0918 12:37:55.726467    1595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 12:37:55.738403    1595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 12:37:55.755012    1595 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0918 12:37:55.755327    1595 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0918 12:37:55.756925    1595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 12:37:55.761076    1595 kubeadm.go:883] updating cluster {Name:addons-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 12:37:55.761123    1595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 12:37:55.761176    1595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 12:37:55.766465    1595 docker.go:685] Got preloaded images: 
	I0918 12:37:55.766473    1595 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0918 12:37:55.766520    1595 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 12:37:55.769973    1595 ssh_runner.go:195] Run: which lz4
	I0918 12:37:55.771454    1595 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 12:37:55.773102    1595 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 12:37:55.773112    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0918 12:37:57.028110    1595 docker.go:649] duration metric: took 1.256746917s to copy over tarball
	I0918 12:37:57.028171    1595 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 12:37:57.980988    1595 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 12:37:57.995811    1595 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 12:37:57.999810    1595 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0918 12:37:58.005925    1595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:37:58.087822    1595 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 12:38:01.089927    1595 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.002180416s)
	I0918 12:38:01.090044    1595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 12:38:01.095503    1595 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 12:38:01.095515    1595 cache_images.go:84] Images are preloaded, skipping loading
	I0918 12:38:01.095536    1595 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0918 12:38:01.095606    1595 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 12:38:01.095671    1595 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 12:38:01.116918    1595 cni.go:84] Creating CNI manager for ""
	I0918 12:38:01.116933    1595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:38:01.116950    1595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 12:38:01.116965    1595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-476000 NodeName:addons-476000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 12:38:01.117028    1595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-476000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 12:38:01.117087    1595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
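The versioned kubeadm binary found above can also sanity-check the generated config once it has been copied to the node (the file lands at /var/tmp/minikube/kubeadm.yaml.new a few lines below); `kubeadm config validate` exists since Kubernetes 1.26, so it applies to the v1.31.1 binaries used here, though this step is illustrative and not part of this run:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new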
	I0918 12:38:01.120896    1595 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 12:38:01.120933    1595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 12:38:01.124378    1595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0918 12:38:01.130954    1595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 12:38:01.136515    1595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0918 12:38:01.142555    1595 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0918 12:38:01.143959    1595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
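
The bash one-liner above is an idempotent hosts-file update: drop any existing control-plane.minikube.internal record, append the current one, and copy the result back over /etc/hosts. A Go sketch of the same operation, with the path and record hard-coded for illustration:

package main

import (
	"os"
	"strings"
)

func main() {
	const record = "192.168.105.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale record for this hostname before re-adding it.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, record)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
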
	I0918 12:38:01.148236    1595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:38:01.230437    1595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 12:38:01.237968    1595 certs.go:68] Setting up /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000 for IP: 192.168.105.2
	I0918 12:38:01.237978    1595 certs.go:194] generating shared ca certs ...
	I0918 12:38:01.237986    1595 certs.go:226] acquiring lock for ca certs: {Name:mk6bf733e3b7a8269fa0cc74c7cf113ceab149df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.238154    1595 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key
	I0918 12:38:01.368186    1595 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt ...
	I0918 12:38:01.368196    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt: {Name:mkde06ad92f53e6f1ba7edc9fc4d10e1bb409df5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.368488    1595 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key ...
	I0918 12:38:01.368499    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key: {Name:mka45c6025c0fb12576ef5c4ff7092c836d8fd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.368709    1595 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key
	I0918 12:38:01.427099    1595 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.crt ...
	I0918 12:38:01.427114    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.crt: {Name:mkd95ab8a7db1d07fb2a231dec480bb189bd97c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.427304    1595 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key ...
	I0918 12:38:01.427308    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key: {Name:mkd2d8ba6baffdc861043f2f387d5531a9337ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.427480    1595 certs.go:256] generating profile certs ...
	I0918 12:38:01.427521    1595 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.key
	I0918 12:38:01.427529    1595 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt with IP's: []
	I0918 12:38:01.542315    1595 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt ...
	I0918 12:38:01.542319    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: {Name:mk11b2819e3444077580186131efadbcd51781d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.542470    1595 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.key ...
	I0918 12:38:01.542473    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.key: {Name:mk811ee6eb4e45873cfd25ab348baac85d95f52a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.542599    1595 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.key.8813fca8
	I0918 12:38:01.542610    1595 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.crt.8813fca8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0918 12:38:01.667296    1595 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.crt.8813fca8 ...
	I0918 12:38:01.667302    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.crt.8813fca8: {Name:mkcad818bb67639f926775c8e430b51e39b156c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.667479    1595 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.key.8813fca8 ...
	I0918 12:38:01.667483    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.key.8813fca8: {Name:mk93cdb868ebd8e84c8f8f6945541b9c365f97a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.667644    1595 certs.go:381] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.crt.8813fca8 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.crt
	I0918 12:38:01.667773    1595 certs.go:385] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.key.8813fca8 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.key
	I0918 12:38:01.667887    1595 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.key
	I0918 12:38:01.667897    1595 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.crt with IP's: []
	I0918 12:38:01.747850    1595 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.crt ...
	I0918 12:38:01.747860    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.crt: {Name:mke4dd7c764d975cf8a44f73ac99813c3d163d44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.748090    1595 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.key ...
	I0918 12:38:01.748095    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.key: {Name:mk6f7ec35892816e100ac6de897a0457edcad1d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:01.748407    1595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 12:38:01.748441    1595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem (1082 bytes)
	I0918 12:38:01.748469    1595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem (1123 bytes)
	I0918 12:38:01.748496    1595 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem (1679 bytes)
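
certs.go is creating two self-signed CAs (minikubeCA and proxyClientCA) and then leaf certificates signed by them for the apiserver, the client user, and the front-proxy aggregator. A compact crypto/x509 sketch of the CA step; key size, lifetime, and subject here are illustrative rather than minikube's exact parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
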
	I0918 12:38:01.748858    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 12:38:01.757470    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 12:38:01.765575    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 12:38:01.773571    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 12:38:01.781550    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 12:38:01.789575    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 12:38:01.797706    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 12:38:01.805653    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 12:38:01.813591    1595 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 12:38:01.821594    1595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 12:38:01.828391    1595 ssh_runner.go:195] Run: openssl version
	I0918 12:38:01.830560    1595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 12:38:01.834090    1595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:38:01.835590    1595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:38:01.835618    1595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 12:38:01.837663    1595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
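
openssl x509 -hash -noout prints the certificate's subject hash (b5213941 for minikubeCA, per the symlink name), and OpenSSL locates trusted CAs in /etc/ssl/certs via <hash>.0 symlinks, which is why the link is created next. The same two steps in Go, shelling out to openssl just as the log does:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: replace any existing link, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
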
	I0918 12:38:01.841039    1595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 12:38:01.842415    1595 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 12:38:01.842457    1595 kubeadm.go:392] StartCluster: {Name:addons-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 12:38:01.842532    1595 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 12:38:01.847775    1595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 12:38:01.851724    1595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 12:38:01.855368    1595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 12:38:01.858932    1595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 12:38:01.858938    1595 kubeadm.go:157] found existing configuration files:
	
	I0918 12:38:01.858967    1595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 12:38:01.862202    1595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 12:38:01.862234    1595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 12:38:01.865329    1595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 12:38:01.873224    1595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 12:38:01.873288    1595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 12:38:01.876970    1595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 12:38:01.880885    1595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 12:38:01.880927    1595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 12:38:01.884530    1595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 12:38:01.887835    1595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 12:38:01.887873    1595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
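
The four grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm will regenerate it (on this first start the files simply do not exist, hence every grep exits with status 2). The check reduces to something like:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or pointing at the wrong endpoint: drop it so kubeadm rewrites it.
			fmt.Println("removing", conf)
			_ = os.Remove(conf)
		}
	}
}
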
	I0918 12:38:01.891552    1595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 12:38:01.914736    1595 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 12:38:01.914768    1595 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 12:38:01.951829    1595 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 12:38:01.951931    1595 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 12:38:01.951984    1595 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 12:38:01.956116    1595 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 12:38:01.960865    1595 out.go:235]   - Generating certificates and keys ...
	I0918 12:38:01.960907    1595 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 12:38:01.960938    1595 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 12:38:02.100621    1595 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 12:38:02.269028    1595 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 12:38:02.528882    1595 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 12:38:02.675962    1595 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 12:38:02.901598    1595 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 12:38:02.901662    1595 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-476000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 12:38:03.041808    1595 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 12:38:03.041892    1595 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-476000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0918 12:38:03.200618    1595 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 12:38:03.303449    1595 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 12:38:03.465146    1595 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 12:38:03.465188    1595 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 12:38:03.566291    1595 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 12:38:03.641231    1595 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 12:38:03.774315    1595 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 12:38:03.877064    1595 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 12:38:03.933786    1595 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 12:38:03.933977    1595 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 12:38:03.935171    1595 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 12:38:03.938818    1595 out.go:235]   - Booting up control plane ...
	I0918 12:38:03.938865    1595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 12:38:03.938910    1595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 12:38:03.938947    1595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 12:38:03.942697    1595 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 12:38:03.945406    1595 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 12:38:03.945429    1595 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 12:38:04.034795    1595 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 12:38:04.034860    1595 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 12:38:04.537041    1595 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.602583ms
	I0918 12:38:04.537220    1595 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 12:38:07.540158    1595 kubeadm.go:310] [api-check] The API server is healthy after 3.003856502s
	I0918 12:38:07.549483    1595 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 12:38:07.558084    1595 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 12:38:07.567182    1595 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 12:38:07.567348    1595 kubeadm.go:310] [mark-control-plane] Marking the node addons-476000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 12:38:07.572536    1595 kubeadm.go:310] [bootstrap-token] Using token: igt04x.8vqvv7jvg6qyv33i
	I0918 12:38:07.578964    1595 out.go:235]   - Configuring RBAC rules ...
	I0918 12:38:07.579019    1595 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 12:38:07.579875    1595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 12:38:07.585396    1595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 12:38:07.586305    1595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 12:38:07.587490    1595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 12:38:07.588325    1595 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 12:38:07.951275    1595 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 12:38:08.351561    1595 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 12:38:08.945722    1595 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 12:38:08.946488    1595 kubeadm.go:310] 
	I0918 12:38:08.946559    1595 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 12:38:08.946564    1595 kubeadm.go:310] 
	I0918 12:38:08.946678    1595 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 12:38:08.946717    1595 kubeadm.go:310] 
	I0918 12:38:08.946743    1595 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 12:38:08.946865    1595 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 12:38:08.946931    1595 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 12:38:08.946949    1595 kubeadm.go:310] 
	I0918 12:38:08.947035    1595 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 12:38:08.947041    1595 kubeadm.go:310] 
	I0918 12:38:08.947086    1595 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 12:38:08.947092    1595 kubeadm.go:310] 
	I0918 12:38:08.947143    1595 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 12:38:08.947239    1595 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 12:38:08.947309    1595 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 12:38:08.947318    1595 kubeadm.go:310] 
	I0918 12:38:08.947415    1595 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 12:38:08.947542    1595 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 12:38:08.947552    1595 kubeadm.go:310] 
	I0918 12:38:08.947680    1595 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token igt04x.8vqvv7jvg6qyv33i \
	I0918 12:38:08.947786    1595 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 \
	I0918 12:38:08.947813    1595 kubeadm.go:310] 	--control-plane 
	I0918 12:38:08.947828    1595 kubeadm.go:310] 
	I0918 12:38:08.947917    1595 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 12:38:08.947928    1595 kubeadm.go:310] 
	I0918 12:38:08.948018    1595 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token igt04x.8vqvv7jvg6qyv33i \
	I0918 12:38:08.948126    1595 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 
	I0918 12:38:08.948503    1595 kubeadm.go:310] W0918 19:38:02.401511    1591 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 12:38:08.948850    1595 kubeadm.go:310] W0918 19:38:02.401800    1591 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 12:38:08.948964    1595 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
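
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA before trusting anything the API server sends. It can be recomputed from ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER encoding of the CA's Subject Public Key Info.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}

The kubeadm documentation shows an equivalent openssl pipeline (x509 -pubkey piped through an SHA-256 digest) for computing the same value by hand.
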
	I0918 12:38:08.948979    1595 cni.go:84] Creating CNI manager for ""
	I0918 12:38:08.948994    1595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:38:08.953962    1595 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 12:38:08.957955    1595 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 12:38:08.965606    1595 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 12:38:08.976974    1595 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 12:38:08.977090    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:08.977133    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-476000 minikube.k8s.io/updated_at=2024_09_18T12_38_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-476000 minikube.k8s.io/primary=true
	I0918 12:38:08.983091    1595 ops.go:34] apiserver oom_adj: -16
	I0918 12:38:09.042812    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:09.544894    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:10.044884    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:10.543487    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:11.044838    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:11.544803    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:12.044869    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:12.544781    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:13.044906    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:13.544433    1595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 12:38:13.591601    1595 kubeadm.go:1113] duration metric: took 4.614755917s to wait for elevateKubeSystemPrivileges
	I0918 12:38:13.591616    1595 kubeadm.go:394] duration metric: took 11.749526125s to StartCluster
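
The ten kubectl get sa default runs above are a poll loop: after creating the minikube-rbac binding, elevateKubeSystemPrivileges waits for the default ServiceAccount to exist (it appears once the controller-manager's token controller has run) before startup continues. Stripped down, the pattern is:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			return // the default ServiceAccount exists; safe to proceed
		} else if time.Now().After(deadline) {
			log.Fatalf("default ServiceAccount never appeared: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
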
	I0918 12:38:13.591627    1595 settings.go:142] acquiring lock: {Name:mkbb043d0459391a7d922bd686e90e22968feef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:13.591783    1595 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 12:38:13.591991    1595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:38:13.592246    1595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 12:38:13.592276    1595 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 12:38:13.592322    1595 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0918 12:38:13.592405    1595 addons.go:69] Setting yakd=true in profile "addons-476000"
	I0918 12:38:13.592406    1595 addons.go:69] Setting gcp-auth=true in profile "addons-476000"
	I0918 12:38:13.592414    1595 addons.go:234] Setting addon yakd=true in "addons-476000"
	I0918 12:38:13.592419    1595 mustload.go:65] Loading cluster: addons-476000
	I0918 12:38:13.592426    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592415    1595 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-476000"
	I0918 12:38:13.592447    1595 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-476000"
	I0918 12:38:13.592461    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592497    1595 config.go:182] Loaded profile config "addons-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 12:38:13.592487    1595 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-476000"
	I0918 12:38:13.592513    1595 addons.go:69] Setting cloud-spanner=true in profile "addons-476000"
	I0918 12:38:13.592520    1595 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-476000"
	I0918 12:38:13.592523    1595 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-476000"
	I0918 12:38:13.592528    1595 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-476000"
	I0918 12:38:13.592538    1595 addons.go:69] Setting storage-provisioner=true in profile "addons-476000"
	I0918 12:38:13.592551    1595 addons.go:69] Setting registry=true in profile "addons-476000"
	I0918 12:38:13.592557    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592561    1595 addons.go:69] Setting ingress-dns=true in profile "addons-476000"
	I0918 12:38:13.592567    1595 addons.go:69] Setting metrics-server=true in profile "addons-476000"
	I0918 12:38:13.592571    1595 addons.go:69] Setting ingress=true in profile "addons-476000"
	I0918 12:38:13.592577    1595 addons.go:234] Setting addon ingress=true in "addons-476000"
	I0918 12:38:13.592582    1595 addons.go:69] Setting inspektor-gadget=true in profile "addons-476000"
	I0918 12:38:13.592591    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592592    1595 addons.go:234] Setting addon inspektor-gadget=true in "addons-476000"
	I0918 12:38:13.592619    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592647    1595 addons.go:234] Setting addon ingress-dns=true in "addons-476000"
	I0918 12:38:13.592658    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592515    1595 config.go:182] Loaded profile config "addons-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 12:38:13.592892    1595 retry.go:31] will retry after 1.327103359s: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.592562    1595 addons.go:69] Setting volcano=true in profile "addons-476000"
	I0918 12:38:13.592910    1595 retry.go:31] will retry after 671.166936ms: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.592929    1595 retry.go:31] will retry after 1.03722868s: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.592577    1595 addons.go:234] Setting addon metrics-server=true in "addons-476000"
	I0918 12:38:13.592938    1595 addons.go:69] Setting volumesnapshots=true in profile "addons-476000"
	I0918 12:38:13.592942    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592528    1595 addons.go:234] Setting addon cloud-spanner=true in "addons-476000"
	I0918 12:38:13.592951    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592552    1595 addons.go:234] Setting addon storage-provisioner=true in "addons-476000"
	I0918 12:38:13.593029    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.593178    1595 retry.go:31] will retry after 676.947054ms: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.593193    1595 retry.go:31] will retry after 1.035286229s: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.593193    1595 retry.go:31] will retry after 1.131134622s: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.592918    1595 addons.go:234] Setting addon volcano=true in "addons-476000"
	I0918 12:38:13.593244    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592943    1595 addons.go:234] Setting addon volumesnapshots=true in "addons-476000"
	I0918 12:38:13.592558    1595 addons.go:234] Setting addon registry=true in "addons-476000"
	I0918 12:38:13.593341    1595 retry.go:31] will retry after 1.254344477s: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.593345    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.592479    1595 addons.go:69] Setting default-storageclass=true in profile "addons-476000"
	I0918 12:38:13.593347    1595 retry.go:31] will retry after 586.079619ms: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.593231    1595 retry.go:31] will retry after 920.222658ms: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.593374    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.593403    1595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-476000"
	I0918 12:38:13.593500    1595 retry.go:31] will retry after 522.692798ms: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.593596    1595 retry.go:31] will retry after 955.444829ms: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
	I0918 12:38:13.593663    1595 retry.go:31] will retry after 813.339284ms: connect: dial unix /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/monitor: connect: connection refused
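
The burst of retry.go lines shows minikube's generic retry wrapper around QEMU monitor dials: each addon goroutine that gets connection refused schedules another attempt after a randomized sub-second delay rather than hammering the socket. A sketch of that shape (delays illustrative, not minikube's exact backoff):

package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

// retryDial retries op with randomized delays, mirroring the
// "will retry after ..." lines in the log above.
func retryDial(op func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := 500*time.Millisecond + time.Duration(rand.Int63n(int64(time.Second)))
		log.Printf("will retry after %v: %v", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retryDial(func() error { return errors.New("connection refused") }, 3)
	log.Println("final:", err)
}
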
	I0918 12:38:13.594960    1595 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-476000"
	I0918 12:38:13.596159    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:13.595805    1595 out.go:177] * Verifying Kubernetes components...
	I0918 12:38:13.603675    1595 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 12:38:13.603675    1595 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 12:38:13.608737    1595 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 12:38:13.608792    1595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 12:38:13.614781    1595 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 12:38:13.615104    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 12:38:13.615114    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:13.620747    1595 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 12:38:13.623749    1595 out.go:177]   - Using image docker.io/busybox:stable
	I0918 12:38:13.626766    1595 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 12:38:13.626772    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 12:38:13.626782    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:13.631841    1595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 12:38:13.631852    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 12:38:13.631861    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:13.657780    1595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 12:38:13.714014    1595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 12:38:13.733852    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 12:38:13.804040    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 12:38:13.827427    1595 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 12:38:13.827440    1595 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 12:38:13.835244    1595 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 12:38:13.835257    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 12:38:13.841946    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 12:38:13.911563    1595 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0918 12:38:13.912026    1595 node_ready.go:35] waiting up to 6m0s for node "addons-476000" to be "Ready" ...
	I0918 12:38:13.918921    1595 node_ready.go:49] node "addons-476000" has status "Ready":"True"
	I0918 12:38:13.918939    1595 node_ready.go:38] duration metric: took 6.89375ms for node "addons-476000" to be "Ready" ...
	I0918 12:38:13.918942    1595 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 12:38:13.923684    1595 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-476000" in "kube-system" namespace to be "Ready" ...
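
node_ready.go's check boils down to reading the NodeReady condition off the node object, and the pod_ready.go wait that follows does the equivalent per system-critical pod. With client-go, the node half looks roughly like this (kubeconfig path as used on the guest):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-476000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node Ready=%s\n", c.Status) // "True" once kubelet reports healthy
		}
	}
}
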
	I0918 12:38:14.120947    1595 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0918 12:38:14.130656    1595 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0918 12:38:14.140695    1595 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0918 12:38:14.144160    1595 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 12:38:14.144171    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0918 12:38:14.144182    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.168170    1595 addons.go:475] Verifying addon registry=true in "addons-476000"
	I0918 12:38:14.172043    1595 out.go:177] * Verifying registry addon...
	I0918 12:38:14.179200    1595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 12:38:14.182725    1595 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 12:38:14.185763    1595 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 12:38:14.185778    1595 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 12:38:14.185789    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.186084    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 12:38:14.186411    1595 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 12:38:14.186416    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:14.266559    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:14.273777    1595 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 12:38:14.276673    1595 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 12:38:14.276688    1595 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 12:38:14.276699    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.281109    1595 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 12:38:14.281123    1595 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 12:38:14.293690    1595 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 12:38:14.293701    1595 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 12:38:14.323149    1595 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 12:38:14.323164    1595 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 12:38:14.334732    1595 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 12:38:14.334743    1595 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 12:38:14.344503    1595 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 12:38:14.344519    1595 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 12:38:14.353295    1595 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 12:38:14.353307    1595 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 12:38:14.360674    1595 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 12:38:14.360681    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 12:38:14.360738    1595 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 12:38:14.360742    1595 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 12:38:14.367030    1595 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 12:38:14.367039    1595 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 12:38:14.373988    1595 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 12:38:14.373999    1595 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 12:38:14.380510    1595 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 12:38:14.380518    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 12:38:14.408259    1595 addons.go:234] Setting addon default-storageclass=true in "addons-476000"
	I0918 12:38:14.408280    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:14.408860    1595 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 12:38:14.408867    1595 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 12:38:14.408873    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.412074    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 12:38:14.413180    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 12:38:14.413407    1595 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-476000" context rescaled to 1 replicas
	I0918 12:38:14.606812    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 12:38:14.612752    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 12:38:14.622676    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 12:38:14.622676    1595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 12:38:14.622752    1595 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 12:38:14.622764    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.635618    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 12:38:14.641727    1595 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 12:38:14.644627    1595 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 12:38:14.647733    1595 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 12:38:14.647745    1595 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 12:38:14.647757    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.651556    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 12:38:14.651614    1595 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 12:38:14.651621    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 12:38:14.651630    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.659719    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 12:38:14.666704    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 12:38:14.673707    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 12:38:14.675256    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 12:38:14.681237    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:14.685705    1595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 12:38:14.689740    1595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 12:38:14.689751    1595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 12:38:14.689764    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.727712    1595 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 12:38:14.731713    1595 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 12:38:14.731722    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 12:38:14.731733    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.851773    1595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 12:38:14.856108    1595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 12:38:14.856118    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 12:38:14.856129    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.909141    1595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 12:38:14.909152    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 12:38:14.918881    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 12:38:14.924582    1595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 12:38:14.929022    1595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 12:38:14.935663    1595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 12:38:14.938841    1595 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 12:38:14.938851    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 12:38:14.938861    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:14.972427    1595 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 12:38:14.972441    1595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 12:38:14.995699    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 12:38:14.999164    1595 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 12:38:14.999177    1595 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 12:38:15.025701    1595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 12:38:15.025715    1595 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 12:38:15.081134    1595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 12:38:15.081151    1595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 12:38:15.084022    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 12:38:15.090841    1595 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 12:38:15.090857    1595 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 12:38:15.119332    1595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 12:38:15.119345    1595 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 12:38:15.129520    1595 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 12:38:15.129534    1595 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 12:38:15.146612    1595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 12:38:15.146625    1595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 12:38:15.171016    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 12:38:15.185051    1595 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 12:38:15.185068    1595 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 12:38:15.186342    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:15.201303    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 12:38:15.223231    1595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 12:38:15.223243    1595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 12:38:15.228056    1595 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 12:38:15.228064    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 12:38:15.315643    1595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 12:38:15.315655    1595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 12:38:15.370790    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 12:38:15.376576    1595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 12:38:15.376586    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 12:38:15.418454    1595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 12:38:15.418471    1595 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 12:38:15.577709    1595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 12:38:15.577719    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 12:38:15.682761    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:15.713956    1595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 12:38:15.713967    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 12:38:15.737146    1595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 12:38:15.737158    1595 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 12:38:15.753594    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 12:38:15.935889    1595 pod_ready.go:103] pod "etcd-addons-476000" in "kube-system" namespace has status "Ready":"False"
	I0918 12:38:16.272408    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:16.687397    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:17.204813    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:17.744862    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:17.942014    1595 pod_ready.go:93] pod "etcd-addons-476000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:38:17.942024    1595 pod_ready.go:82] duration metric: took 4.018451917s for pod "etcd-addons-476000" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:17.942029    1595 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-476000" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:18.166911    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.980935334s)
	I0918 12:38:18.166957    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.754987791s)
	I0918 12:38:18.166977    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.753904041s)
	I0918 12:38:18.167038    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.4918815s)
	I0918 12:38:18.167054    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.248264084s)
	I0918 12:38:18.167137    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.171526083s)
	I0918 12:38:18.167189    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.083249417s)
	I0918 12:38:18.174622    1595 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-476000 service yakd-dashboard -n yakd-dashboard
	
	I0918 12:38:18.240410    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:18.477609    1595 pod_ready.go:93] pod "kube-apiserver-addons-476000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:38:18.477620    1595 pod_ready.go:82] duration metric: took 535.603958ms for pod "kube-apiserver-addons-476000" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:18.477626    1595 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-476000" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:18.520270    1595 pod_ready.go:93] pod "kube-controller-manager-addons-476000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:38:18.520280    1595 pod_ready.go:82] duration metric: took 42.651833ms for pod "kube-controller-manager-addons-476000" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:18.520285    1595 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k82t4" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:18.530842    1595 pod_ready.go:93] pod "kube-proxy-k82t4" in "kube-system" namespace has status "Ready":"True"
	I0918 12:38:18.530852    1595 pod_ready.go:82] duration metric: took 10.564417ms for pod "kube-proxy-k82t4" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:18.530871    1595 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-476000" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:18.576306    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.405378041s)
	I0918 12:38:18.576324    1595 addons.go:475] Verifying addon ingress=true in "addons-476000"
	I0918 12:38:18.576341    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.375129458s)
	I0918 12:38:18.576355    1595 addons.go:475] Verifying addon metrics-server=true in "addons-476000"
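"Verifying addon metrics-server=true" only marks the addon for verification; whether the metrics API actually serves depends on the metrics-server pod, which is still Pending in the pod list further down. A quick manual check once it is Running, with stock kubectl pointed at this profile:

	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl top nodes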
	I0918 12:38:18.576429    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.20572575s)
	W0918 12:38:18.576469    1595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 12:38:18.576485    1595 retry.go:31] will retry after 288.975686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
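This failure is the usual CRD race rather than a broken manifest: the same kubectl invocation both creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass that depends on them, and the CRDs are not yet established when the custom resource is resolved. minikube simply retries (and, at 12:38:18.867 below, re-applies with --force), which succeeds once API discovery catches up. A manual way to avoid the race, assuming plain kubectl access, is to gate the dependent apply on the CRD's Established condition:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml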
	I0918 12:38:18.582619    1595 out.go:177] * Verifying ingress addon...
	I0918 12:38:18.595321    1595 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 12:38:18.597570    1595 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 12:38:18.703414    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:18.768705    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.015178375s)
	I0918 12:38:18.768725    1595 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-476000"
	I0918 12:38:18.773616    1595 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 12:38:18.779917    1595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 12:38:18.789271    1595 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 12:38:18.789280    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:18.867619    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 12:38:19.233786    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:19.337820    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:19.702513    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:19.784291    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:20.182867    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:20.283991    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:20.535656    1595 pod_ready.go:103] pod "kube-scheduler-addons-476000" in "kube-system" namespace has status "Ready":"False"
	I0918 12:38:20.702348    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:20.784244    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:21.273458    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:21.283236    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:21.352444    1595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.484878958s)
	I0918 12:38:21.714344    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:21.784351    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:21.873282    1595 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 12:38:21.873300    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:21.909622    1595 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 12:38:21.915844    1595 addons.go:234] Setting addon gcp-auth=true in "addons-476000"
	I0918 12:38:21.915862    1595 host.go:66] Checking if "addons-476000" exists ...
	I0918 12:38:21.916557    1595 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 12:38:21.916564    1595 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/addons-476000/id_rsa Username:docker}
	I0918 12:38:21.949061    1595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 12:38:21.952941    1595 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 12:38:21.959959    1595 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 12:38:21.959965    1595 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 12:38:21.966393    1595 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 12:38:21.966398    1595 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 12:38:21.972389    1595 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 12:38:21.972395    1595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 12:38:21.978751    1595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
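The gcp-auth addon staged above deploys a mutating admission webhook that injects the credentials copied to /var/lib/minikube/google_application_credentials.json into workload pods. A hedged way to confirm the webhook registered (the object name may differ by minikube version):

	kubectl get mutatingwebhookconfigurations | grep -i gcp-auth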
	I0918 12:38:22.035778    1595 pod_ready.go:93] pod "kube-scheduler-addons-476000" in "kube-system" namespace has status "Ready":"True"
	I0918 12:38:22.035787    1595 pod_ready.go:82] duration metric: took 3.505020875s for pod "kube-scheduler-addons-476000" in "kube-system" namespace to be "Ready" ...
	I0918 12:38:22.035791    1595 pod_ready.go:39] duration metric: took 8.117093959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 12:38:22.035799    1595 api_server.go:52] waiting for apiserver process to appear ...
	I0918 12:38:22.035859    1595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 12:38:22.159640    1595 api_server.go:72] duration metric: took 8.567616083s to wait for apiserver process to appear ...
	I0918 12:38:22.159656    1595 api_server.go:88] waiting for apiserver healthz status ...
	I0918 12:38:22.159665    1595 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0918 12:38:22.160529    1595 addons.go:475] Verifying addon gcp-auth=true in "addons-476000"
	I0918 12:38:22.162185    1595 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0918 12:38:22.162640    1595 api_server.go:141] control plane version: v1.31.1
	I0918 12:38:22.162647    1595 api_server.go:131] duration metric: took 2.987958ms to wait for apiserver health ...
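The healthz probe is an ordinary HTTPS GET against the apiserver; /healthz is readable anonymously under the default system:public-info-viewer RBAC binding, so the same check can be reproduced from the host (IP and port from the log, -k to skip the self-signed cert):

	curl -k https://192.168.105.2:8443/healthz
	# expected body: ok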
	I0918 12:38:22.162657    1595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 12:38:22.165421    1595 out.go:177] * Verifying gcp-auth addon...
	I0918 12:38:22.175874    1595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 12:38:22.203122    1595 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 12:38:22.203444    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:22.205271    1595 system_pods.go:59] 17 kube-system pods found
	I0918 12:38:22.205277    1595 system_pods.go:61] "coredns-7c65d6cfc9-5cssv" [1dd1c64a-e5d7-4a44-ba2c-1f9fd2360362] Running
	I0918 12:38:22.205281    1595 system_pods.go:61] "csi-hostpath-attacher-0" [0067298e-1486-48b7-b680-d4d24a513671] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 12:38:22.205286    1595 system_pods.go:61] "csi-hostpath-resizer-0" [53fdf8ca-2b4e-4d52-a0de-4338cad0f52d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 12:38:22.205291    1595 system_pods.go:61] "csi-hostpathplugin-zc4h8" [3f809175-9a45-4455-8be4-1a2386d1b7fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 12:38:22.205295    1595 system_pods.go:61] "etcd-addons-476000" [c9193b75-3e82-4d63-ade6-a6d1fa30c798] Running
	I0918 12:38:22.205297    1595 system_pods.go:61] "kube-apiserver-addons-476000" [38bd2e63-0ecc-4fdb-8722-3a21f9ee89a2] Running
	I0918 12:38:22.205299    1595 system_pods.go:61] "kube-controller-manager-addons-476000" [c43acaf1-8b03-4bff-8e3e-c1ba022ab188] Running
	I0918 12:38:22.205301    1595 system_pods.go:61] "kube-ingress-dns-minikube" [59dc7b9a-0507-4774-bfd4-c129bcf57832] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0918 12:38:22.205302    1595 system_pods.go:61] "kube-proxy-k82t4" [f99c8912-731f-4612-a6b8-0617f5415e5c] Running
	I0918 12:38:22.205304    1595 system_pods.go:61] "kube-scheduler-addons-476000" [6e5f27f6-577a-44c4-a6d1-fc1653469552] Running
	I0918 12:38:22.205306    1595 system_pods.go:61] "metrics-server-84c5f94fbc-4jn9k" [32f905d8-b16d-4b06-842f-d4fd0ea4a6d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 12:38:22.205309    1595 system_pods.go:61] "nvidia-device-plugin-daemonset-fmdxx" [d1efddf9-af8b-411f-b498-b4c94a38e667] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0918 12:38:22.205312    1595 system_pods.go:61] "registry-66c9cd494c-zqh2d" [2cd0b0e2-c98a-477c-9973-6e010d122199] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 12:38:22.205314    1595 system_pods.go:61] "registry-proxy-pgsgm" [733c8b1c-39a5-4634-92d4-ac15f0f79484] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 12:38:22.205318    1595 system_pods.go:61] "snapshot-controller-56fcc65765-jwxf9" [a7aa1aaf-0afd-41c7-a45d-5cdc7184cdac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 12:38:22.205321    1595 system_pods.go:61] "snapshot-controller-56fcc65765-pzs84" [20998355-fa3d-4676-82bb-738c5e33290b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 12:38:22.205323    1595 system_pods.go:61] "storage-provisioner" [1ba6eb06-ed7f-4ef6-a4b5-5c101eb24444] Running
	I0918 12:38:22.205326    1595 system_pods.go:74] duration metric: took 42.66675ms to wait for pod list to return data ...
	I0918 12:38:22.205329    1595 default_sa.go:34] waiting for default service account to be created ...
	I0918 12:38:22.206425    1595 default_sa.go:45] found service account: "default"
	I0918 12:38:22.206430    1595 default_sa.go:55] duration metric: took 1.098875ms for default service account to be created ...
	I0918 12:38:22.206433    1595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 12:38:22.210520    1595 system_pods.go:86] 17 kube-system pods found
	I0918 12:38:22.210527    1595 system_pods.go:89] "coredns-7c65d6cfc9-5cssv" [1dd1c64a-e5d7-4a44-ba2c-1f9fd2360362] Running
	I0918 12:38:22.210531    1595 system_pods.go:89] "csi-hostpath-attacher-0" [0067298e-1486-48b7-b680-d4d24a513671] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 12:38:22.210534    1595 system_pods.go:89] "csi-hostpath-resizer-0" [53fdf8ca-2b4e-4d52-a0de-4338cad0f52d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 12:38:22.210537    1595 system_pods.go:89] "csi-hostpathplugin-zc4h8" [3f809175-9a45-4455-8be4-1a2386d1b7fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 12:38:22.210539    1595 system_pods.go:89] "etcd-addons-476000" [c9193b75-3e82-4d63-ade6-a6d1fa30c798] Running
	I0918 12:38:22.210541    1595 system_pods.go:89] "kube-apiserver-addons-476000" [38bd2e63-0ecc-4fdb-8722-3a21f9ee89a2] Running
	I0918 12:38:22.210544    1595 system_pods.go:89] "kube-controller-manager-addons-476000" [c43acaf1-8b03-4bff-8e3e-c1ba022ab188] Running
	I0918 12:38:22.210547    1595 system_pods.go:89] "kube-ingress-dns-minikube" [59dc7b9a-0507-4774-bfd4-c129bcf57832] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0918 12:38:22.210549    1595 system_pods.go:89] "kube-proxy-k82t4" [f99c8912-731f-4612-a6b8-0617f5415e5c] Running
	I0918 12:38:22.210551    1595 system_pods.go:89] "kube-scheduler-addons-476000" [6e5f27f6-577a-44c4-a6d1-fc1653469552] Running
	I0918 12:38:22.210554    1595 system_pods.go:89] "metrics-server-84c5f94fbc-4jn9k" [32f905d8-b16d-4b06-842f-d4fd0ea4a6d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 12:38:22.210557    1595 system_pods.go:89] "nvidia-device-plugin-daemonset-fmdxx" [d1efddf9-af8b-411f-b498-b4c94a38e667] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0918 12:38:22.210560    1595 system_pods.go:89] "registry-66c9cd494c-zqh2d" [2cd0b0e2-c98a-477c-9973-6e010d122199] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0918 12:38:22.210563    1595 system_pods.go:89] "registry-proxy-pgsgm" [733c8b1c-39a5-4634-92d4-ac15f0f79484] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 12:38:22.210565    1595 system_pods.go:89] "snapshot-controller-56fcc65765-jwxf9" [a7aa1aaf-0afd-41c7-a45d-5cdc7184cdac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 12:38:22.210577    1595 system_pods.go:89] "snapshot-controller-56fcc65765-pzs84" [20998355-fa3d-4676-82bb-738c5e33290b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 12:38:22.210581    1595 system_pods.go:89] "storage-provisioner" [1ba6eb06-ed7f-4ef6-a4b5-5c101eb24444] Running
	I0918 12:38:22.210584    1595 system_pods.go:126] duration metric: took 4.148375ms to wait for k8s-apps to be running ...
	I0918 12:38:22.210588    1595 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 12:38:22.210635    1595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 12:38:22.216609    1595 system_svc.go:56] duration metric: took 6.019333ms WaitForService to wait for kubelet
	I0918 12:38:22.216618    1595 kubeadm.go:582] duration metric: took 8.624599333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 12:38:22.216628    1595 node_conditions.go:102] verifying NodePressure condition ...
	I0918 12:38:22.218265    1595 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0918 12:38:22.218271    1595 node_conditions.go:123] node cpu capacity is 2
	I0918 12:38:22.218277    1595 node_conditions.go:105] duration metric: took 1.646542ms to run NodePressure ...
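The NodePressure check reads the capacity fields reported in the node status (here 17734596Ki of ephemeral storage and 2 CPUs for the VM). The same figures are visible directly:

	kubectl get node addons-476000 -o jsonpath='{.status.capacity}'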
	I0918 12:38:22.218282    1595 start.go:241] waiting for startup goroutines ...
	I0918 12:38:22.305554    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:22.702026    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:22.783930    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:23.200563    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:23.303610    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:23.680563    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:23.784432    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:24.184554    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:24.287637    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:24.680420    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:24.783868    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:25.180199    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:25.283908    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:25.680394    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:25.783945    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:26.180768    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:26.284968    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:26.680181    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:26.784181    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:27.180273    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:27.284050    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:27.680415    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:27.784157    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:28.180235    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:28.284174    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:28.680301    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:28.977272    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:29.201004    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 12:38:29.302505    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:29.681024    1595 kapi.go:107] duration metric: took 15.502321916s to wait for kubernetes.io/minikube-addons=registry ...
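kapi.go implements these waits by polling the pods matching a label selector until they leave Pending; the registry pods took ~15.5s here. An equivalent one-shot check with stock kubectl (the kube-system namespace is confirmed by the registry/registry-proxy entries in the pod list above):

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=120s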
	I0918 12:38:29.785556    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:30.286861    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:30.783874    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:31.284534    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:31.784020    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:32.283993    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:32.784076    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:33.311579    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:33.783577    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:34.287844    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:34.783976    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:35.283964    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:35.783744    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:36.283824    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:36.783771    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:37.286601    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:37.801727    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:38.283515    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:38.784206    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:39.283570    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:39.783889    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:40.283982    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:40.783569    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:41.283711    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:41.783805    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:42.282225    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:42.784022    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:43.283732    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:43.785064    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:44.283741    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:44.783509    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:45.283756    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:45.783882    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:46.283296    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:46.783292    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:47.282718    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:47.783531    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:48.286169    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:48.781963    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:49.283977    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:49.783371    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:50.283289    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:50.783240    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:51.283443    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:51.782497    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:52.283571    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:52.783881    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:53.282999    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:53.797640    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:54.284724    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:54.785937    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:55.281419    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:55.783230    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:56.284032    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:56.783840    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:57.282906    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:57.782647    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:58.282654    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:58.783163    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:59.282851    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:38:59.782656    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:00.282586    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:00.781661    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:01.283520    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:01.783400    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:02.282754    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:02.780782    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:03.283191    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:03.782626    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:04.283400    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:04.782721    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:05.283110    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:05.782758    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:06.300006    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:06.782711    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:07.283006    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:07.782966    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:08.283217    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:08.782736    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:09.298899    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:09.782623    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:10.285106    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:10.782443    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:11.282833    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:11.782663    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:12.283139    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:12.788353    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 12:39:13.283235    1595 kapi.go:107] duration metric: took 54.505006375s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
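The csi-hostpath-driver pods (csi-hostpath-attacher-0, csi-hostpath-resizer-0, and the six-container csi-hostpathplugin pod listed earlier) needed ~54.5s, the longest addon wait in this run so far. Their state can be inspected directly with the same label selector:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver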
	I0918 12:39:41.097563    1595 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 12:39:41.097575    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:41.597127    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:42.096934    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:42.602874    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:43.097744    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:43.597152    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:44.098100    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:44.597403    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:44.676859    1595 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 12:39:44.676870    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:39:45.097273    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:45.177179    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:39:45.597328    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 12:39:45.676476    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:39:46.097323    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 143 similar polling lines elided: both label selectors re-checked every ~0.5s from 12:39:46 through 12:40:21, state unchanged: Pending ...]
	I0918 12:40:22.095692    1595 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 12:40:22.095699    1595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 123 similar polling lines elided: polling continued every ~0.5s through 12:40:52, state unchanged: Pending ...]
	I0918 12:40:53.094946    1595 kapi.go:107] duration metric: took 2m34.504414583s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 12:40:53.174745    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:40:53.675084    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:40:54.241267    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:40:54.674844    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:40:55.174990    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:40:55.674833    1595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 12:40:56.174033    1595 kapi.go:107] duration metric: took 2m34.002936291s to wait for kubernetes.io/minikube-addons=gcp-auth ...
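
The two-and-a-half-minute wait recorded above is a plain label-selector polling loop: list the pods matching a selector, log their phase, sleep, repeat. A minimal sketch of that pattern using client-go follows; the function name, the 500ms interval, and the timeout handling are illustrative assumptions, not minikube's actual kapi.WaitForPods code:

    // Package kapi sketches the polling pattern visible in the log:
    // list pods matching a label selector and report their phase until
    // every pod is Running or a deadline passes.
    package kapi

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			running := 0
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					running++
    				} else {
    					// The "current state: Pending" lines above come from a report like this.
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				}
    			}
    			if running == len(pods.Items) {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
    	}
    	return fmt.Errorf("timed out waiting for %s", selector)
    }
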
	I0918 12:40:56.180217    1595 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-476000 cluster.
	I0918 12:40:56.184248    1595 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 12:40:56.190188    1595 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
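
For the opt-out mentioned above, the log states that the presence of the `gcp-auth-skip-secret` label key is what matters. A short example of a pod spec carrying that label, written as a client-go object; the pod name, image, and the "true" value are placeholders, not anything the addon requires:

    package podspec

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // skipAuthPod carries the gcp-auth-skip-secret label so the gcp-auth
    // webhook leaves it alone; per the message above only the key matters,
    // and the "true" value here is an arbitrary placeholder.
    func skipAuthPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:   "no-gcp-creds",
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
    		},
    	}
    }
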
	I0918 12:40:56.196267    1595 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner-rancher, volcano, inspektor-gadget, ingress-dns, cloud-spanner, storage-provisioner, yakd, default-storageclass, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0918 12:40:56.200225    1595 addons.go:510] duration metric: took 2m42.612956916s for enable addons: enabled=[nvidia-device-plugin storage-provisioner-rancher volcano inspektor-gadget ingress-dns cloud-spanner storage-provisioner yakd default-storageclass metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0918 12:40:56.200291    1595 start.go:246] waiting for cluster config update ...
	I0918 12:40:56.200509    1595 start.go:255] writing updated cluster config ...
	I0918 12:40:56.205368    1595 ssh_runner.go:195] Run: rm -f paused
	I0918 12:40:56.357182    1595 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0918 12:40:56.362145    1595 out.go:201] 
	W0918 12:40:56.366213    1595 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0918 12:40:56.370211    1595 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0918 12:40:56.378256    1595 out.go:177] * Done! kubectl is now configured to use "addons-476000" cluster and "default" namespace by default
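
The "minor skew: 2" figure in the warning above is simply the distance between the minor components of kubectl 1.29.2 and cluster 1.31.1. A small sketch of that comparison; the helper name is invented and this is not minikube's actual version-check code:

    package skew

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions
    // of two "major.minor.patch" strings, e.g. "1.29.2" vs "1.31.1" -> 2.
    func minorSkew(a, b string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(v, ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("bad version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	ma, err := minor(a)
    	if err != nil {
    		return 0, err
    	}
    	mb, err := minor(b)
    	if err != nil {
    		return 0, err
    	}
    	if ma > mb {
    		return ma - mb, nil
    	}
    	return mb - ma, nil
    }
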
	
	
	==> Docker <==
	Sep 18 19:50:45 addons-476000 dockerd[1278]: time="2024-09-18T19:50:45.374174587Z" level=info msg="ignoring event" container=71162cfc24c5fb412f4e6fc4ab34a07b4f05443268a4cb723256c22209752e28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.374160087Z" level=info msg="shim disconnected" id=71162cfc24c5fb412f4e6fc4ab34a07b4f05443268a4cb723256c22209752e28 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.374540920Z" level=warning msg="cleaning up after shim disconnected" id=71162cfc24c5fb412f4e6fc4ab34a07b4f05443268a4cb723256c22209752e28 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.374558670Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1278]: time="2024-09-18T19:50:45.533172594Z" level=info msg="ignoring event" container=253792d7e3f38d99f9ed5f2a3b0b8d61f20231ede62fe48fb59463bbcae3e734 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.533333011Z" level=info msg="shim disconnected" id=253792d7e3f38d99f9ed5f2a3b0b8d61f20231ede62fe48fb59463bbcae3e734 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.533363219Z" level=warning msg="cleaning up after shim disconnected" id=253792d7e3f38d99f9ed5f2a3b0b8d61f20231ede62fe48fb59463bbcae3e734 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.533367344Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1278]: time="2024-09-18T19:50:45.564905462Z" level=info msg="ignoring event" container=e2833a0cba5706d71ff7d2cd9f197750fb5c4527536bee427ef67069bebd66ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.567428628Z" level=info msg="shim disconnected" id=e2833a0cba5706d71ff7d2cd9f197750fb5c4527536bee427ef67069bebd66ed namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.567467961Z" level=warning msg="cleaning up after shim disconnected" id=e2833a0cba5706d71ff7d2cd9f197750fb5c4527536bee427ef67069bebd66ed namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.567472628Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1278]: time="2024-09-18T19:50:45.622333908Z" level=info msg="ignoring event" container=c266cc035c2c90644711c7245ddc2a40a9df471d84a84418996af64bc6d5db52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.622462366Z" level=info msg="shim disconnected" id=c266cc035c2c90644711c7245ddc2a40a9df471d84a84418996af64bc6d5db52 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.622729741Z" level=warning msg="cleaning up after shim disconnected" id=c266cc035c2c90644711c7245ddc2a40a9df471d84a84418996af64bc6d5db52 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.622749199Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1278]: time="2024-09-18T19:50:45.680590729Z" level=info msg="ignoring event" container=9b03ce7a6878f259b4866651f38c8269bb74068c4b72c26fa81663a2ac66bcd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.680548812Z" level=info msg="shim disconnected" id=9b03ce7a6878f259b4866651f38c8269bb74068c4b72c26fa81663a2ac66bcd1 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.680632895Z" level=warning msg="cleaning up after shim disconnected" id=9b03ce7a6878f259b4866651f38c8269bb74068c4b72c26fa81663a2ac66bcd1 namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.680662479Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.685205686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.686810394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.686830727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:50:45 addons-476000 dockerd[1284]: time="2024-09-18T19:50:45.686886686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 19:50:45 addons-476000 cri-dockerd[1176]: time="2024-09-18T19:50:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/60a3b8cb6671a1b533e1cec793ac0effa58cbdaba41614f23e89e7513c739f98/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	aa5a465d5491d       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  12 seconds ago      Running             hello-world-app            0                   19ecd33288e6e       hello-world-app-55bf9c44b4-kk756
	26f2d7a25232e       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                20 seconds ago      Running             nginx                      0                   906affca1e5b6       nginx
	1c7af5d7a00db       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   60be9d83f22bb       gcp-auth-89d5ffd79-7gh6l
	f7a78de048059       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              patch                      0                   087d4c19807b6       ingress-nginx-admission-patch-9pg76
	6364c6e897bff       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                     0                   38419bb62a43f       ingress-nginx-admission-create-wtzpd
	7740a210eac9c       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   9a5f636a45ece       cloud-spanner-emulator-769b77f747-wlp4j
	7c1d57dbaff3e       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   3c50ff19ccb8a       yakd-dashboard-67d98fc6b-xj5dm
	e2833a0cba570       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   9b03ce7a6878f       registry-proxy-pgsgm
	2f30cb6774344       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   f9f5207ceeb37       local-path-provisioner-86d989889c-b8zt5
	253792d7e3f38       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   c266cc035c2c9       registry-66c9cd494c-zqh2d
	347094bc34ff9       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   bb93f25f05057       nvidia-device-plugin-daemonset-fmdxx
	788eab1659ddf       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   a3968cdb1fd0d       storage-provisioner
	28245236dfe33       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   59f5c9fe91e27       coredns-7c65d6cfc9-5cssv
	0aebde47980a9       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   31d52dab6aaae       kube-proxy-k82t4
	5f0160d95ab6f       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   0558b1d9ef886       kube-apiserver-addons-476000
	d05f628ff0fb5       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   9a44eeb42a667       kube-controller-manager-addons-476000
	959d9665bd453       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   8d20ccbf7e87a       etcd-addons-476000
	a57de6ad76ac3       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   10ae5bbf99813       kube-scheduler-addons-476000
	
	
	==> coredns [28245236dfe3] <==
	[INFO] 10.244.0.23:49264 - 22065 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035333s
	[INFO] 10.244.0.23:49264 - 57614 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038374s
	[INFO] 10.244.0.23:49264 - 39000 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033208s
	[INFO] 10.244.0.23:49264 - 16658 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036375s
	[INFO] 10.244.0.23:52601 - 46709 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029417s
	[INFO] 10.244.0.23:52601 - 27089 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014708s
	[INFO] 10.244.0.23:52601 - 61675 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013875s
	[INFO] 10.244.0.23:52601 - 61520 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000019708s
	[INFO] 10.244.0.23:52601 - 33458 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000016584s
	[INFO] 10.244.0.23:52601 - 39703 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000019541s
	[INFO] 10.244.0.23:52601 - 27610 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000016s
	[INFO] 10.244.0.23:52086 - 2155 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047833s
	[INFO] 10.244.0.23:58418 - 32898 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001425s
	[INFO] 10.244.0.23:58418 - 23932 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015167s
	[INFO] 10.244.0.23:52086 - 57499 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010875s
	[INFO] 10.244.0.23:58418 - 23645 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012958s
	[INFO] 10.244.0.23:52086 - 52976 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010208s
	[INFO] 10.244.0.23:58418 - 13330 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011625s
	[INFO] 10.244.0.23:52086 - 61843 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00000925s
	[INFO] 10.244.0.23:58418 - 25430 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011291s
	[INFO] 10.244.0.23:52086 - 48457 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010083s
	[INFO] 10.244.0.23:58418 - 32223 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011083s
	[INFO] 10.244.0.23:52086 - 13139 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012458s
	[INFO] 10.244.0.23:58418 - 50613 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013708s
	[INFO] 10.244.0.23:52086 - 43296 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000027s
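
The burst of NXDOMAIN answers above is ordinary resolv.conf search expansion: with ndots:5 (see the cri-dockerd resolv.conf line in the Docker section), the name hello-world-app.default.svc.cluster.local has only four dots, so the resolver tries each search suffix (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local) before the name itself, which finally returns NOERROR. A trailing dot marks the name fully qualified and skips those extra queries; a minimal sketch, assuming it runs inside the cluster:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// The trailing dot makes the name fully qualified, so the
    	// resolver does not append the cluster search suffixes first.
    	addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println(addrs)
    }
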
	
	
	==> describe nodes <==
	Name:               addons-476000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-476000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-476000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T12_38_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-476000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:38:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-476000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:50:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:50:44 +0000   Wed, 18 Sep 2024 19:38:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:50:44 +0000   Wed, 18 Sep 2024 19:38:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:50:44 +0000   Wed, 18 Sep 2024 19:38:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:50:44 +0000   Wed, 18 Sep 2024 19:38:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-476000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7c3daa6e3874d26bac4b40cb3adaeba
	  System UUID:                f7c3daa6e3874d26bac4b40cb3adaeba
	  Boot ID:                    e9098f71-288c-4637-8914-3720e7468571
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  default                     cloud-spanner-emulator-769b77f747-wlp4j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-kk756           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     registry-test                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  default                     test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  gcp-auth                    gcp-auth-89d5ffd79-7gh6l                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-5cssv                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-476000                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-476000               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-476000      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-k82t4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-476000               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-fmdxx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-b8zt5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-xj5dm             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-476000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-476000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-476000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-476000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-476000 event: Registered Node addons-476000 in Controller
	
	
	==> dmesg <==
	[  +5.556519] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.957177] kauditd_printk_skb: 14 callbacks suppressed
	[Sep18 19:39] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.190196] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.372747] kauditd_printk_skb: 7 callbacks suppressed
	[ +30.011558] kauditd_printk_skb: 21 callbacks suppressed
	[Sep18 19:40] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.355927] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.347479] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.777352] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.023953] kauditd_printk_skb: 22 callbacks suppressed
	[Sep18 19:41] kauditd_printk_skb: 6 callbacks suppressed
	[ +19.267442] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.488991] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.800820] kauditd_printk_skb: 2 callbacks suppressed
	[Sep18 19:44] kauditd_printk_skb: 10 callbacks suppressed
	[Sep18 19:49] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.538446] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.275917] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.796274] kauditd_printk_skb: 7 callbacks suppressed
	[Sep18 19:50] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.295810] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.260090] kauditd_printk_skb: 4 callbacks suppressed
	[ +14.626335] kauditd_printk_skb: 13 callbacks suppressed
	[  +9.019901] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [959d9665bd45] <==
	{"level":"info","ts":"2024-09-18T19:38:05.845327Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-476000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T19:38:05.845475Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:05.845500Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:05.845521Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:05.845525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:05.845993Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:05.849651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-18T19:38:05.850024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:05.853495Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:05.853621Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:05.853655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:05.853953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T19:38:21.454223Z","caller":"traceutil/trace.go:171","msg":"trace[689226592] transaction","detail":"{read_only:false; response_revision:876; number_of_response:1; }","duration":"289.940319ms","start":"2024-09-18T19:38:21.164275Z","end":"2024-09-18T19:38:21.454215Z","steps":["trace[689226592] 'process raft request'  (duration: 289.834512ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:38:21.454377Z","caller":"traceutil/trace.go:171","msg":"trace[1720198457] linearizableReadLoop","detail":"{readStateIndex:891; appliedIndex:891; }","duration":"255.34808ms","start":"2024-09-18T19:38:21.199026Z","end":"2024-09-18T19:38:21.454374Z","steps":["trace[1720198457] 'read index received'  (duration: 255.346656ms)","trace[1720198457] 'applied index is now lower than readState.Index'  (duration: 1.18µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-18T19:38:21.454462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.397059ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/volcano-system/volcano-admission-77d7d48b68-nr294.17f66ddbbcab46b7\" ","response":"range_response_count:1 size:841"}
	{"level":"info","ts":"2024-09-18T19:38:21.454479Z","caller":"traceutil/trace.go:171","msg":"trace[1234041105] range","detail":"{range_begin:/registry/events/volcano-system/volcano-admission-77d7d48b68-nr294.17f66ddbbcab46b7; range_end:; response_count:1; response_revision:876; }","duration":"255.453888ms","start":"2024-09-18T19:38:21.199020Z","end":"2024-09-18T19:38:21.454474Z","steps":["trace[1234041105] 'agreement among raft nodes before linearized reading'  (duration: 255.370942ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:38:21.455242Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.683789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-addons-476000\" ","response":"range_response_count:1 size:4512"}
	{"level":"info","ts":"2024-09-18T19:38:21.455574Z","caller":"traceutil/trace.go:171","msg":"trace[1657758768] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-addons-476000; range_end:; response_count:1; response_revision:877; }","duration":"132.018338ms","start":"2024-09-18T19:38:21.323552Z","end":"2024-09-18T19:38:21.455570Z","steps":["trace[1657758768] 'agreement among raft nodes before linearized reading'  (duration: 131.656534ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:38:29.142975Z","caller":"traceutil/trace.go:171","msg":"trace[679915387] linearizableReadLoop","detail":"{readStateIndex:970; appliedIndex:969; }","duration":"189.81662ms","start":"2024-09-18T19:38:28.953151Z","end":"2024-09-18T19:38:29.142968Z","steps":["trace[679915387] 'read index received'  (duration: 189.751468ms)","trace[679915387] 'applied index is now lower than readState.Index'  (duration: 64.946µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-18T19:38:29.143012Z","caller":"traceutil/trace.go:171","msg":"trace[620590776] transaction","detail":"{read_only:false; response_revision:953; number_of_response:1; }","duration":"227.701827ms","start":"2024-09-18T19:38:28.915308Z","end":"2024-09-18T19:38:29.143010Z","steps":["trace[620590776] 'process raft request'  (duration: 227.616266ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-18T19:38:29.143133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.976581ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-18T19:38:29.143146Z","caller":"traceutil/trace.go:171","msg":"trace[390022826] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:953; }","duration":"189.998516ms","start":"2024-09-18T19:38:28.953144Z","end":"2024-09-18T19:38:29.143142Z","steps":["trace[390022826] 'agreement among raft nodes before linearized reading'  (duration: 189.973687ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-18T19:48:05.994212Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1844}
	{"level":"info","ts":"2024-09-18T19:48:06.087913Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1844,"took":"88.406149ms","hash":1857521070,"current-db-size-bytes":8667136,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4816896,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-18T19:48:06.088540Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1857521070,"revision":1844,"compact-revision":-1}
	
	
	==> gcp-auth [1c7af5d7a00d] <==
	2024/09/18 19:40:55 GCP Auth Webhook started!
	2024/09/18 19:41:11 Ready to marshal response ...
	2024/09/18 19:41:11 Ready to write response ...
	2024/09/18 19:41:12 Ready to marshal response ...
	2024/09/18 19:41:12 Ready to write response ...
	2024/09/18 19:41:34 Ready to marshal response ...
	2024/09/18 19:41:34 Ready to write response ...
	2024/09/18 19:41:34 Ready to marshal response ...
	2024/09/18 19:41:34 Ready to write response ...
	2024/09/18 19:41:34 Ready to marshal response ...
	2024/09/18 19:41:34 Ready to write response ...
	2024/09/18 19:49:36 Ready to marshal response ...
	2024/09/18 19:49:36 Ready to write response ...
	2024/09/18 19:49:45 Ready to marshal response ...
	2024/09/18 19:49:45 Ready to write response ...
	2024/09/18 19:49:52 Ready to marshal response ...
	2024/09/18 19:49:52 Ready to write response ...
	2024/09/18 19:50:22 Ready to marshal response ...
	2024/09/18 19:50:22 Ready to write response ...
	2024/09/18 19:50:31 Ready to marshal response ...
	2024/09/18 19:50:31 Ready to write response ...
	2024/09/18 19:50:40 Ready to marshal response ...
	2024/09/18 19:50:40 Ready to write response ...
	2024/09/18 19:50:40 Ready to marshal response ...
	2024/09/18 19:50:40 Ready to write response ...
	
	
	==> kernel <==
	 19:50:46 up 13 min,  0 users,  load average: 1.34, 0.74, 0.44
	Linux addons-476000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5f0160d95ab6] <==
	I0918 19:41:24.639012       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0918 19:41:25.336121       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0918 19:41:25.338152       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0918 19:41:25.413529       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0918 19:41:25.430107       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0918 19:41:25.637520       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0918 19:41:25.639185       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0918 19:41:25.647738       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0918 19:49:44.447935       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0918 19:50:06.718557       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:06.718575       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:06.730794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:06.730818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:06.744990       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:06.745012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:50:06.748674       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:50:06.748698       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0918 19:50:07.730946       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:50:07.749610       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 19:50:07.793384       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0918 19:50:17.343226       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:50:18.353902       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0918 19:50:22.669973       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0918 19:50:22.769153       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.192.25"}
	I0918 19:50:32.003716       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.12.99"}
	
	
	==> kube-controller-manager [d05f628ff0fb] <==
	E0918 19:50:31.594820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:50:31.946621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.868784ms"
	I0918 19:50:31.950747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.315402ms"
	I0918 19:50:31.950939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="22.458µs"
	I0918 19:50:31.951882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.958µs"
	I0918 19:50:32.983969       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="1.5µs"
	I0918 19:50:32.984317       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0918 19:50:32.988382       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0918 19:50:34.874406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="6.897109ms"
	I0918 19:50:34.874447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.875µs"
	W0918 19:50:35.432836       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:35.433486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:36.829767       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:36.829879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:38.490337       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:38.490436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:50:41.214007       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:50:41.214034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:50:43.155191       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0918 19:50:43.734854       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0918 19:50:43.734894       1 shared_informer.go:320] Caches are synced for resource quota
	I0918 19:50:44.204599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-476000"
	I0918 19:50:44.234599       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0918 19:50:44.234617       1 shared_informer.go:320] Caches are synced for garbage collector
	I0918 19:50:45.497537       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.75µs"
	
	
	==> kube-proxy [0aebde47980a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 19:38:15.073125       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 19:38:15.088909       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0918 19:38:15.088948       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:38:15.138076       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 19:38:15.138101       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:38:15.138131       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:38:15.138976       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:38:15.139132       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:38:15.139139       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:38:15.140118       1 config.go:199] "Starting service config controller"
	I0918 19:38:15.140127       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:38:15.140137       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:38:15.140139       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:38:15.140342       1 config.go:328] "Starting node config controller"
	I0918 19:38:15.140347       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:38:15.241269       1 shared_informer.go:320] Caches are synced for node config
	I0918 19:38:15.241289       1 shared_informer.go:320] Caches are synced for service config
	I0918 19:38:15.241300       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a57de6ad76ac] <==
	E0918 19:38:06.644635       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:38:06.644645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644167       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:06.644653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:38:06.644683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:38:06.644742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644260       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:38:06.644755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:06.644766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:06.644799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:06.644505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:06.644811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0918 19:38:06.644968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:07.465873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:07.466156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:07.476827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:38:07.476946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:07.715940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:38:07.715999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0918 19:38:07.842482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 19:50:44 addons-476000 kubelet[2055]: I0918 19:50:44.278044    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cedb800-862a-40a3-accd-a06d13818e47-kube-api-access-6bg6c" (OuterVolumeSpecName: "kube-api-access-6bg6c") pod "8cedb800-862a-40a3-accd-a06d13818e47" (UID: "8cedb800-862a-40a3-accd-a06d13818e47"). InnerVolumeSpecName "kube-api-access-6bg6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:44 addons-476000 kubelet[2055]: I0918 19:50:44.374939    2055 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8cedb800-862a-40a3-accd-a06d13818e47-gcp-creds\") on node \"addons-476000\" DevicePath \"\""
	Sep 18 19:50:44 addons-476000 kubelet[2055]: I0918 19:50:44.374966    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6bg6c\" (UniqueName: \"kubernetes.io/projected/8cedb800-862a-40a3-accd-a06d13818e47-kube-api-access-6bg6c\") on node \"addons-476000\" DevicePath \"\""
	Sep 18 19:50:44 addons-476000 kubelet[2055]: I0918 19:50:44.374973    2055 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/8cedb800-862a-40a3-accd-a06d13818e47-data\") on node \"addons-476000\" DevicePath \"\""
	Sep 18 19:50:44 addons-476000 kubelet[2055]: I0918 19:50:44.374977    2055 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/8cedb800-862a-40a3-accd-a06d13818e47-script\") on node \"addons-476000\" DevicePath \"\""
	Sep 18 19:50:44 addons-476000 kubelet[2055]: E0918 19:50:44.697955    2055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fb05452d-0906-472a-b5c0-dda626bf3067"
	Sep 18 19:50:44 addons-476000 kubelet[2055]: I0918 19:50:44.705595    2055 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cedb800-862a-40a3-accd-a06d13818e47" path="/var/lib/kubelet/pods/8cedb800-862a-40a3-accd-a06d13818e47/volumes"
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.152939    2055 scope.go:117] "RemoveContainer" containerID="f53a1464e18f507736209d7d9dac9093b1b00d4993882689c19a33ba1ae76c09"
	Sep 18 19:50:45 addons-476000 kubelet[2055]: E0918 19:50:45.338129    2055 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8cedb800-862a-40a3-accd-a06d13818e47" containerName="helper-pod"
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.338163    2055 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cedb800-862a-40a3-accd-a06d13818e47" containerName="helper-pod"
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.483835    2055 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9gbw\" (UniqueName: \"kubernetes.io/projected/7fdf125a-b9f7-4506-904b-345d59683639-kube-api-access-t9gbw\") pod \"test-local-path\" (UID: \"7fdf125a-b9f7-4506-904b-345d59683639\") " pod="default/test-local-path"
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.483860    2055 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9102924f-1203-4dc1-93dd-9133f9ce5121\" (UniqueName: \"kubernetes.io/host-path/7fdf125a-b9f7-4506-904b-345d59683639-pvc-9102924f-1203-4dc1-93dd-9133f9ce5121\") pod \"test-local-path\" (UID: \"7fdf125a-b9f7-4506-904b-345d59683639\") " pod="default/test-local-path"
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.483872    2055 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7fdf125a-b9f7-4506-904b-345d59683639-gcp-creds\") pod \"test-local-path\" (UID: \"7fdf125a-b9f7-4506-904b-345d59683639\") " pod="default/test-local-path"
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.584378    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9pbh\" (UniqueName: \"kubernetes.io/projected/5440b68a-c592-4973-a396-d6669ec4a295-kube-api-access-g9pbh\") pod \"5440b68a-c592-4973-a396-d6669ec4a295\" (UID: \"5440b68a-c592-4973-a396-d6669ec4a295\") "
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.584400    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5440b68a-c592-4973-a396-d6669ec4a295-gcp-creds\") pod \"5440b68a-c592-4973-a396-d6669ec4a295\" (UID: \"5440b68a-c592-4973-a396-d6669ec4a295\") "
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.584649    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5440b68a-c592-4973-a396-d6669ec4a295-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5440b68a-c592-4973-a396-d6669ec4a295" (UID: "5440b68a-c592-4973-a396-d6669ec4a295"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.590294    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5440b68a-c592-4973-a396-d6669ec4a295-kube-api-access-g9pbh" (OuterVolumeSpecName: "kube-api-access-g9pbh") pod "5440b68a-c592-4973-a396-d6669ec4a295" (UID: "5440b68a-c592-4973-a396-d6669ec4a295"). InnerVolumeSpecName "kube-api-access-g9pbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.685380    2055 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5440b68a-c592-4973-a396-d6669ec4a295-gcp-creds\") on node \"addons-476000\" DevicePath \"\""
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.685394    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g9pbh\" (UniqueName: \"kubernetes.io/projected/5440b68a-c592-4973-a396-d6669ec4a295-kube-api-access-g9pbh\") on node \"addons-476000\" DevicePath \"\""
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.785765    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm9q7\" (UniqueName: \"kubernetes.io/projected/2cd0b0e2-c98a-477c-9973-6e010d122199-kube-api-access-mm9q7\") pod \"2cd0b0e2-c98a-477c-9973-6e010d122199\" (UID: \"2cd0b0e2-c98a-477c-9973-6e010d122199\") "
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.785782    2055 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mln7z\" (UniqueName: \"kubernetes.io/projected/733c8b1c-39a5-4634-92d4-ac15f0f79484-kube-api-access-mln7z\") pod \"733c8b1c-39a5-4634-92d4-ac15f0f79484\" (UID: \"733c8b1c-39a5-4634-92d4-ac15f0f79484\") "
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.786594    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/733c8b1c-39a5-4634-92d4-ac15f0f79484-kube-api-access-mln7z" (OuterVolumeSpecName: "kube-api-access-mln7z") pod "733c8b1c-39a5-4634-92d4-ac15f0f79484" (UID: "733c8b1c-39a5-4634-92d4-ac15f0f79484"). InnerVolumeSpecName "kube-api-access-mln7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.786674    2055 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd0b0e2-c98a-477c-9973-6e010d122199-kube-api-access-mm9q7" (OuterVolumeSpecName: "kube-api-access-mm9q7") pod "2cd0b0e2-c98a-477c-9973-6e010d122199" (UID: "2cd0b0e2-c98a-477c-9973-6e010d122199"). InnerVolumeSpecName "kube-api-access-mm9q7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.886098    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mm9q7\" (UniqueName: \"kubernetes.io/projected/2cd0b0e2-c98a-477c-9973-6e010d122199-kube-api-access-mm9q7\") on node \"addons-476000\" DevicePath \"\""
	Sep 18 19:50:45 addons-476000 kubelet[2055]: I0918 19:50:45.886114    2055 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mln7z\" (UniqueName: \"kubernetes.io/projected/733c8b1c-39a5-4634-92d4-ac15f0f79484-kube-api-access-mln7z\") on node \"addons-476000\" DevicePath \"\""
	
	
	==> storage-provisioner [788eab1659dd] <==
	I0918 19:38:18.641747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:38:18.767734       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:38:18.767759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:38:18.843962       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:38:18.844056       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-476000_49c72e17-4ac7-45ea-81b7-d7f9c50de518!
	I0918 19:38:18.844484       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3177c609-a03a-4afc-be4d-5bb76ab912fd", APIVersion:"v1", ResourceVersion:"772", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-476000_49c72e17-4ac7-45ea-81b7-d7f9c50de518 became leader
	I0918 19:38:18.945018       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-476000_49c72e17-4ac7-45ea-81b7-d7f9c50de518!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-476000 -n addons-476000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-476000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-476000 describe pod busybox test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-476000 describe pod busybox test-local-path:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-476000/192.168.105.2
	Start Time:       Wed, 18 Sep 2024 12:41:34 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pzphc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pzphc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-476000
	  Normal   Pulling    7m45s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m19s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x20 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-476000/192.168.105.2
	Start Time:       Wed, 18 Sep 2024 12:50:45 -0700
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t9gbw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t9gbw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/test-local-path to addons-476000
	  Normal  Pulling    1s    kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.27s)
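
The decisive failure in the post-mortem above is the busybox pod stuck in ImagePullBackOff: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed". A minimal way to retry that pull outside the test harness, using the addons-476000 profile from the logs (a sketch; it assumes the cluster is still up and uses the Docker runtime shown in the container logs above):

    # Retry the failing pull inside the minikube VM's Docker daemon
    out/minikube-darwin-arm64 -p addons-476000 ssh -- "docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

    # Re-read the pod's events for the ErrImagePull / ImagePullBackOff records
    kubectl --context addons-476000 describe pod busybox | sed -n '/^Events:/,$p'
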

                                                
                                    
TestCertOptions (12.2s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-958000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-958000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.935254708s)

                                                
                                                
-- stdout --
	* [cert-options-958000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-958000" primary control-plane node in "cert-options-958000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-958000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-958000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-958000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-958000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.257875ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-958000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-958000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-958000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
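
The four assertions above mirror the --apiserver-ips and --apiserver-names flags passed to minikube start at cert_options_test.go:49: each value is expected to appear as an IP or DNS Subject Alternative Name in the apiserver certificate. They all fail here only because the host never started, so there was no certificate to read. Once a VM does boot, the check the test automates can be run by hand (a sketch reusing the exact command from cert_options_test.go:60):

    # Print the SAN block of the apiserver cert; expect IP Address:127.0.0.1,
    # IP Address:192.168.15.15, DNS:localhost and DNS:www.google.com to be listed
    out/minikube-darwin-arm64 -p cert-options-958000 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
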
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-958000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-958000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-958000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.227584ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-958000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-958000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-958000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-958000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-958000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-18 13:32:38.180549 -0700 PDT m=+3329.285039251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-958000 -n cert-options-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-958000 -n cert-options-958000: exit status 7 (29.408292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-958000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-958000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-958000
--- FAIL: TestCertOptions (12.20s)
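
This failure, and TestCertExpiration below, never get past host provisioning: the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so no VM boots and every later assertion fails mechanically. A quick host-side triage sketch (the daemon path assumes the standalone socket_vmnet install; a Homebrew install places the binary elsewhere):

    # Does the socket the qemu2 driver dials actually exist?
    ls -l /var/run/socket_vmnet

    # Is the socket_vmnet daemon running at all?
    pgrep -fl socket_vmnet

    # If not, start it by hand; 192.168.105.1 matches the gateway implied by
    # the 192.168.105.2 node address seen in the addons-476000 logs above
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
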

                                                
                                    
TestCertExpiration (199.29s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-319000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-319000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.410938167s)

                                                
                                                
-- stdout --
	* [cert-expiration-319000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-319000" primary control-plane node in "cert-expiration-319000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-319000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-319000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-319000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-319000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-319000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (6.713204167s)

                                                
                                                
-- stdout --
	* [cert-expiration-319000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-319000" primary control-plane node in "cert-expiration-319000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-319000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-319000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-319000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-319000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-319000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-319000" primary control-plane node in "cert-expiration-319000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-319000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-319000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-319000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-18 13:35:35.052824 -0700 PDT m=+3506.107785876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-319000 -n cert-expiration-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-319000 -n cert-expiration-319000: exit status 7 (53.052625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-319000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-319000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-319000
--- FAIL: TestCertExpiration (199.29s)
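Note: every start attempt in this test dies at the same step. The qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the certificate-expiration assertions never execute. Below is a minimal Go sketch of the same reachability probe; only the socket path comes from the log above, the rest is an illustrative diagnostic, not minikube code.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path reported in the failures above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Reproduces the condition behind each attempt:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, the socket_vmnet daemon on the build agent needs to be (re)started; rerunning the suite against a dead socket will reproduce exit status 80.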

TestDockerFlags (10.1s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-669000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
W0918 13:32:06.960936    1516 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0918 13:32:06.961185    1516 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0918 13:32:06.961229    1516 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit
I0918 13:32:07.475383    1516 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40] Decompressors:map[bz2:0x1400000e4d0 gz:0x1400000e4d8 tar:0x1400000e480 tar.bz2:0x1400000e490 tar.gz:0x1400000e4a0 tar.xz:0x1400000e4b0 tar.zst:0x1400000e4c0 tbz2:0x1400000e490 tgz:0x1400000e4a0 txz:0x1400000e4b0 tzst:0x1400000e4c0 xz:0x1400000e4e0 zip:0x1400000e4f0 zst:0x1400000e4e8] Getters:map[file:0x140014603c0 http:0x1400052d590 https:0x1400052d5e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 13:32:07.475514    1516 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit
I0918 13:32:10.770722    1516 install.go:79] stdout: 
W0918 13:32:10.770907    1516 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit 

I0918 13:32:10.770929    1516 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit]
I0918 13:32:10.786127    1516 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit]
I0918 13:32:10.797453    1516 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit]
I0918 13:32:10.806670    1516 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/001/docker-machine-driver-hyperkit]
I0918 13:32:10.823035    1516 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 13:32:10.823154    1516 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0918 13:32:12.607149    1516 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0918 13:32:12.607167    1516 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0918 13:32:12.607214    1516 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0918 13:32:12.607269    1516 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit
I0918 13:32:13.015859    1516 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40 0x108b9ed40] Decompressors:map[bz2:0x1400000e4d0 gz:0x1400000e4d8 tar:0x1400000e480 tar.bz2:0x1400000e490 tar.gz:0x1400000e4a0 tar.xz:0x1400000e4b0 tar.zst:0x1400000e4c0 tbz2:0x1400000e490 tgz:0x1400000e4a0 txz:0x1400000e4b0 tzst:0x1400000e4c0 xz:0x1400000e4e0 zip:0x1400000e4f0 zst:0x1400000e4e8] Getters:map[file:0x140005022b0 http:0x14000634730 https:0x14000634780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0918 13:32:13.016026    1516 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-669000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.856565334s)

-- stdout --
	* [docker-flags-669000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-669000" primary control-plane node in "docker-flags-669000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-669000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:32:05.715651    4727 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:32:05.715789    4727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:05.715792    4727 out.go:358] Setting ErrFile to fd 2...
	I0918 13:32:05.715795    4727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:05.715930    4727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:32:05.716960    4727 out.go:352] Setting JSON to false
	I0918 13:32:05.733828    4727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3684,"bootTime":1726687841,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:32:05.733920    4727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:32:05.748140    4727 out.go:177] * [docker-flags-669000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:32:05.757174    4727 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:32:05.757183    4727 notify.go:220] Checking for updates...
	I0918 13:32:05.779123    4727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:32:05.793146    4727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:32:05.798135    4727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:32:05.801174    4727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:32:05.804065    4727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:32:05.807445    4727 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:32:05.807485    4727 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:32:05.811083    4727 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:32:05.818130    4727 start.go:297] selected driver: qemu2
	I0918 13:32:05.818136    4727 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:32:05.818141    4727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:32:05.820540    4727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:32:05.823111    4727 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:32:05.827145    4727 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0918 13:32:05.827160    4727 cni.go:84] Creating CNI manager for ""
	I0918 13:32:05.827180    4727 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:32:05.827187    4727 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:32:05.827213    4727 start.go:340] cluster config:
	{Name:docker-flags-669000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-669000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:32:05.830551    4727 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:05.838081    4727 out.go:177] * Starting "docker-flags-669000" primary control-plane node in "docker-flags-669000" cluster
	I0918 13:32:05.842087    4727 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:32:05.842100    4727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:32:05.842105    4727 cache.go:56] Caching tarball of preloaded images
	I0918 13:32:05.842153    4727 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:32:05.842158    4727 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:32:05.842209    4727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/docker-flags-669000/config.json ...
	I0918 13:32:05.842218    4727 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/docker-flags-669000/config.json: {Name:mk00dbd97af5c5186763745400075f95fc40a35b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:32:05.842505    4727 start.go:360] acquireMachinesLock for docker-flags-669000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:05.842535    4727 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "docker-flags-669000"
	I0918 13:32:05.842543    4727 start.go:93] Provisioning new machine with config: &{Name:docker-flags-669000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-669000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:32:05.842585    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:32:05.850068    4727 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:32:05.866093    4727 start.go:159] libmachine.API.Create for "docker-flags-669000" (driver="qemu2")
	I0918 13:32:05.866126    4727 client.go:168] LocalClient.Create starting
	I0918 13:32:05.866190    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:32:05.866223    4727 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:05.866231    4727 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:05.866271    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:32:05.866294    4727 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:05.866303    4727 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:05.866718    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:32:06.049151    4727 main.go:141] libmachine: Creating SSH key...
	I0918 13:32:06.087333    4727 main.go:141] libmachine: Creating Disk image...
	I0918 13:32:06.087342    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:32:06.087531    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2
	I0918 13:32:06.097340    4727 main.go:141] libmachine: STDOUT: 
	I0918 13:32:06.097356    4727 main.go:141] libmachine: STDERR: 
	I0918 13:32:06.097415    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2 +20000M
	I0918 13:32:06.105847    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:32:06.105878    4727 main.go:141] libmachine: STDERR: 
	I0918 13:32:06.105896    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2
	I0918 13:32:06.105903    4727 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:32:06.105915    4727 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:06.105944    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:21:9e:53:10:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2
	I0918 13:32:06.107681    4727 main.go:141] libmachine: STDOUT: 
	I0918 13:32:06.107695    4727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:06.107719    4727 client.go:171] duration metric: took 241.590958ms to LocalClient.Create
	I0918 13:32:08.109894    4727 start.go:128] duration metric: took 2.267326667s to createHost
	I0918 13:32:08.109956    4727 start.go:83] releasing machines lock for "docker-flags-669000", held for 2.267472416s
	W0918 13:32:08.110025    4727 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:08.128130    4727 out.go:177] * Deleting "docker-flags-669000" in qemu2 ...
	W0918 13:32:08.160546    4727 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:08.160569    4727 start.go:729] Will try again in 5 seconds ...
	I0918 13:32:13.162728    4727 start.go:360] acquireMachinesLock for docker-flags-669000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:13.163195    4727 start.go:364] duration metric: took 367.875µs to acquireMachinesLock for "docker-flags-669000"
	I0918 13:32:13.163332    4727 start.go:93] Provisioning new machine with config: &{Name:docker-flags-669000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-669000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:32:13.163617    4727 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:32:13.185599    4727 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:32:13.235592    4727 start.go:159] libmachine.API.Create for "docker-flags-669000" (driver="qemu2")
	I0918 13:32:13.235652    4727 client.go:168] LocalClient.Create starting
	I0918 13:32:13.235766    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:32:13.235819    4727 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:13.235831    4727 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:13.235892    4727 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:32:13.235931    4727 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:13.235944    4727 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:13.236496    4727 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:32:13.406511    4727 main.go:141] libmachine: Creating SSH key...
	I0918 13:32:13.472999    4727 main.go:141] libmachine: Creating Disk image...
	I0918 13:32:13.473007    4727 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:32:13.473188    4727 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2
	I0918 13:32:13.482428    4727 main.go:141] libmachine: STDOUT: 
	I0918 13:32:13.482450    4727 main.go:141] libmachine: STDERR: 
	I0918 13:32:13.482509    4727 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2 +20000M
	I0918 13:32:13.490599    4727 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:32:13.490616    4727 main.go:141] libmachine: STDERR: 
	I0918 13:32:13.490627    4727 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2
	I0918 13:32:13.490632    4727 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:32:13.490641    4727 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:13.490682    4727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:4c:50:e6:fa:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/docker-flags-669000/disk.qcow2
	I0918 13:32:13.492366    4727 main.go:141] libmachine: STDOUT: 
	I0918 13:32:13.492381    4727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:13.492404    4727 client.go:171] duration metric: took 256.75375ms to LocalClient.Create
	I0918 13:32:15.494544    4727 start.go:128] duration metric: took 2.330941709s to createHost
	I0918 13:32:15.494608    4727 start.go:83] releasing machines lock for "docker-flags-669000", held for 2.331452125s
	W0918 13:32:15.494985    4727 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-669000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-669000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:15.511177    4727 out.go:201] 
	W0918 13:32:15.516362    4727 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:32:15.516430    4727 out.go:270] * 
	* 
	W0918 13:32:15.519388    4727 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:32:15.529202    4727 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-669000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-669000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-669000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (83.531ms)

-- stdout --
	* The control-plane node docker-flags-669000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-669000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-669000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-669000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-669000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-669000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-669000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-669000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-669000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (49.740083ms)

-- stdout --
	* The control-plane node docker-flags-669000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-669000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-669000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-669000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-669000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-669000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-18 13:32:15.678187 -0700 PDT m=+3306.782087293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-669000 -n docker-flags-669000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-669000 -n docker-flags-669000: exit status 7 (30.61525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-669000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-669000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-669000
--- FAIL: TestDockerFlags (10.10s)
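Note: the stderr trace above also shows the shape of minikube's start path on this failure: the first StartHost attempt fails, the half-created profile is deleted, start.go waits a fixed 5 seconds ("Will try again in 5 seconds ..."), makes exactly one more attempt, and then exits with GUEST_PROVISION. A compressed Go sketch of that control flow follows; startHost is a stand-in that fails the way this log does, and the names are ours, not minikube's.

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's host-creation step; on this agent it
// always fails with the refused socket_vmnet connection.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed retry delay seen at start.go:729
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

Because the second attempt hits the same refused socket, the --docker-env/--docker-opt assertions at docker_test.go:63 and docker_test.go:73 run against a stopped host and fail as shown.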

TestForceSystemdFlag (10.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-926000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
I0918 13:32:15.827013    1516 install.go:79] stdout: 
W0918 13:32:15.827124    1516 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit 

I0918 13:32:15.827141    1516 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit]
I0918 13:32:15.836687    1516 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit]
I0918 13:32:15.847461    1516 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit]
I0918 13:32:15.858924    1516 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate403892887/002/docker-machine-driver-hyperkit]
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-926000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.09680325s)

-- stdout --
	* [force-systemd-flag-926000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-926000" primary control-plane node in "force-systemd-flag-926000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-926000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:32:15.817447    4770 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:32:15.817585    4770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:15.817592    4770 out.go:358] Setting ErrFile to fd 2...
	I0918 13:32:15.817595    4770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:15.817740    4770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:32:15.819022    4770 out.go:352] Setting JSON to false
	I0918 13:32:15.836614    4770 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3694,"bootTime":1726687841,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:32:15.836684    4770 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:32:15.842587    4770 out.go:177] * [force-systemd-flag-926000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:32:15.850490    4770 notify.go:220] Checking for updates...
	I0918 13:32:15.854371    4770 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:32:15.862386    4770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:32:15.870263    4770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:32:15.880376    4770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:32:15.885448    4770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:32:15.890460    4770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:32:15.895887    4770 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:32:15.895944    4770 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:32:15.899353    4770 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:32:15.906435    4770 start.go:297] selected driver: qemu2
	I0918 13:32:15.906444    4770 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:32:15.906453    4770 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:32:15.908690    4770 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:32:15.912368    4770 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:32:15.915477    4770 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 13:32:15.915494    4770 cni.go:84] Creating CNI manager for ""
	I0918 13:32:15.915521    4770 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:32:15.915525    4770 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:32:15.915558    4770 start.go:340] cluster config:
	{Name:force-systemd-flag-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:32:15.918964    4770 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:15.923263    4770 out.go:177] * Starting "force-systemd-flag-926000" primary control-plane node in "force-systemd-flag-926000" cluster
	I0918 13:32:15.931403    4770 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:32:15.931416    4770 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:32:15.931442    4770 cache.go:56] Caching tarball of preloaded images
	I0918 13:32:15.931505    4770 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:32:15.931510    4770 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:32:15.931555    4770 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/force-systemd-flag-926000/config.json ...
	I0918 13:32:15.931568    4770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/force-systemd-flag-926000/config.json: {Name:mkf72bed38045755317aa8fbc1d9dfdf4007e819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:32:15.936913    4770 start.go:360] acquireMachinesLock for force-systemd-flag-926000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:15.936968    4770 start.go:364] duration metric: took 43.542µs to acquireMachinesLock for "force-systemd-flag-926000"
	I0918 13:32:15.936981    4770 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:32:15.937006    4770 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:32:15.944382    4770 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:32:15.960592    4770 start.go:159] libmachine.API.Create for "force-systemd-flag-926000" (driver="qemu2")
	I0918 13:32:15.960621    4770 client.go:168] LocalClient.Create starting
	I0918 13:32:15.960676    4770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:32:15.960706    4770 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:15.960716    4770 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:15.960752    4770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:32:15.960775    4770 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:15.960784    4770 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:15.961125    4770 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:32:16.182560    4770 main.go:141] libmachine: Creating SSH key...
	I0918 13:32:16.251294    4770 main.go:141] libmachine: Creating Disk image...
	I0918 13:32:16.251300    4770 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:32:16.251463    4770 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2
	I0918 13:32:16.260781    4770 main.go:141] libmachine: STDOUT: 
	I0918 13:32:16.260802    4770 main.go:141] libmachine: STDERR: 
	I0918 13:32:16.260863    4770 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2 +20000M
	I0918 13:32:16.268784    4770 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:32:16.268799    4770 main.go:141] libmachine: STDERR: 
	I0918 13:32:16.268816    4770 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2
	I0918 13:32:16.268821    4770 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:32:16.268833    4770 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:16.268863    4770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:4c:9c:59:77:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2
	I0918 13:32:16.270472    4770 main.go:141] libmachine: STDOUT: 
	I0918 13:32:16.270487    4770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:16.270508    4770 client.go:171] duration metric: took 309.8875ms to LocalClient.Create
	I0918 13:32:18.272777    4770 start.go:128] duration metric: took 2.335777792s to createHost
	I0918 13:32:18.272894    4770 start.go:83] releasing machines lock for "force-systemd-flag-926000", held for 2.335976s
	W0918 13:32:18.272943    4770 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:18.300070    4770 out.go:177] * Deleting "force-systemd-flag-926000" in qemu2 ...
	W0918 13:32:18.326964    4770 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:18.326987    4770 start.go:729] Will try again in 5 seconds ...
	I0918 13:32:23.328038    4770 start.go:360] acquireMachinesLock for force-systemd-flag-926000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:23.328541    4770 start.go:364] duration metric: took 370.417µs to acquireMachinesLock for "force-systemd-flag-926000"
	I0918 13:32:23.328702    4770 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-926000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:32:23.328984    4770 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:32:23.335816    4770 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:32:23.386854    4770 start.go:159] libmachine.API.Create for "force-systemd-flag-926000" (driver="qemu2")
	I0918 13:32:23.386903    4770 client.go:168] LocalClient.Create starting
	I0918 13:32:23.387016    4770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:32:23.387078    4770 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:23.387096    4770 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:23.387165    4770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:32:23.387211    4770 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:23.387225    4770 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:23.387864    4770 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:32:23.558828    4770 main.go:141] libmachine: Creating SSH key...
	I0918 13:32:23.811828    4770 main.go:141] libmachine: Creating Disk image...
	I0918 13:32:23.811838    4770 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:32:23.812079    4770 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2
	I0918 13:32:23.822126    4770 main.go:141] libmachine: STDOUT: 
	I0918 13:32:23.822144    4770 main.go:141] libmachine: STDERR: 
	I0918 13:32:23.822197    4770 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2 +20000M
	I0918 13:32:23.830223    4770 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:32:23.830237    4770 main.go:141] libmachine: STDERR: 
	I0918 13:32:23.830257    4770 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2
	I0918 13:32:23.830262    4770 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:32:23.830272    4770 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:23.830300    4770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:ef:4f:38:99:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-flag-926000/disk.qcow2
	I0918 13:32:23.831876    4770 main.go:141] libmachine: STDOUT: 
	I0918 13:32:23.831891    4770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:23.831913    4770 client.go:171] duration metric: took 445.014667ms to LocalClient.Create
	I0918 13:32:25.834027    4770 start.go:128] duration metric: took 2.505076541s to createHost
	I0918 13:32:25.834072    4770 start.go:83] releasing machines lock for "force-systemd-flag-926000", held for 2.505571875s
	W0918 13:32:25.834336    4770 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-926000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-926000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:25.853065    4770 out.go:201] 
	W0918 13:32:25.857159    4770 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:32:25.857209    4770 out.go:270] * 
	* 
	W0918 13:32:25.860266    4770 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:32:25.878011    4770 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-926000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-926000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-926000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (71.629459ms)

-- stdout --
	* The control-plane node force-systemd-flag-926000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-926000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-926000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-18 13:32:25.957351 -0700 PDT m=+3317.061521001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-926000 -n force-systemd-flag-926000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-926000 -n force-systemd-flag-926000: exit status 7 (35.867334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-926000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-926000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-926000
--- FAIL: TestForceSystemdFlag (10.30s)
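-- diagnostic note --
Every qemu2 start attempt above fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it needs. "Connection refused" on a unix socket means no process is listening at that path, i.e. the socket_vmnet daemon was most likely not running on the CI host. The Go sketch below is illustrative, not minikube code; it reproduces the failing dial in isolation, with only the socket path taken from the log:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failures above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the log's error and
			// points at the host daemon, not at minikube itself.
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe prints the same connection-refused error on the affected host, the fix is host-side (restore the socket_vmnet daemon), which would also explain TestForceSystemdEnv failing identically below.
-- /diagnostic note --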

TestForceSystemdEnv (9.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-165000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-165000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.798142333s)

-- stdout --
	* [force-systemd-env-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-165000" primary control-plane node in "force-systemd-env-165000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-165000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:31:49.825648    4582 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:31:49.825798    4582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:31:49.825804    4582 out.go:358] Setting ErrFile to fd 2...
	I0918 13:31:49.825807    4582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:31:49.825924    4582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:31:49.827097    4582 out.go:352] Setting JSON to false
	I0918 13:31:49.843398    4582 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3668,"bootTime":1726687841,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:31:49.843469    4582 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:31:49.850300    4582 out.go:177] * [force-systemd-env-165000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:31:49.858081    4582 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:31:49.858150    4582 notify.go:220] Checking for updates...
	I0918 13:31:49.864129    4582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:31:49.865747    4582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:31:49.869152    4582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:31:49.872197    4582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:31:49.875156    4582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0918 13:31:49.878576    4582 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:31:49.878637    4582 config.go:182] Loaded profile config "stopped-upgrade-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:31:49.878683    4582 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:31:49.883169    4582 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:31:49.890268    4582 start.go:297] selected driver: qemu2
	I0918 13:31:49.890275    4582 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:31:49.890280    4582 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:31:49.892425    4582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:31:49.895129    4582 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:31:49.898216    4582 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 13:31:49.898230    4582 cni.go:84] Creating CNI manager for ""
	I0918 13:31:49.898249    4582 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:31:49.898258    4582 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:31:49.898284    4582 start.go:340] cluster config:
	{Name:force-systemd-env-165000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:31:49.901825    4582 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:31:49.905114    4582 out.go:177] * Starting "force-systemd-env-165000" primary control-plane node in "force-systemd-env-165000" cluster
	I0918 13:31:49.912147    4582 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:31:49.912162    4582 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:31:49.912170    4582 cache.go:56] Caching tarball of preloaded images
	I0918 13:31:49.912232    4582 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:31:49.912238    4582 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:31:49.912306    4582 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/force-systemd-env-165000/config.json ...
	I0918 13:31:49.912318    4582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/force-systemd-env-165000/config.json: {Name:mkcd91d189d606f67791aa8616cbff637fc28aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:31:49.912538    4582 start.go:360] acquireMachinesLock for force-systemd-env-165000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:31:49.912570    4582 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "force-systemd-env-165000"
	I0918 13:31:49.912580    4582 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:31:49.912601    4582 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:31:49.919110    4582 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:31:49.934403    4582 start.go:159] libmachine.API.Create for "force-systemd-env-165000" (driver="qemu2")
	I0918 13:31:49.934434    4582 client.go:168] LocalClient.Create starting
	I0918 13:31:49.934494    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:31:49.934524    4582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:31:49.934533    4582 main.go:141] libmachine: Parsing certificate...
	I0918 13:31:49.934574    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:31:49.934597    4582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:31:49.934605    4582 main.go:141] libmachine: Parsing certificate...
	I0918 13:31:49.934940    4582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:31:50.093522    4582 main.go:141] libmachine: Creating SSH key...
	I0918 13:31:50.181046    4582 main.go:141] libmachine: Creating Disk image...
	I0918 13:31:50.181053    4582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:31:50.181224    4582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2
	I0918 13:31:50.190497    4582 main.go:141] libmachine: STDOUT: 
	I0918 13:31:50.190513    4582 main.go:141] libmachine: STDERR: 
	I0918 13:31:50.190580    4582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2 +20000M
	I0918 13:31:50.198642    4582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:31:50.198658    4582 main.go:141] libmachine: STDERR: 
	I0918 13:31:50.198671    4582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2
	I0918 13:31:50.198675    4582 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:31:50.198703    4582 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:31:50.198732    4582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:d0:45:86:d6:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2
	I0918 13:31:50.200333    4582 main.go:141] libmachine: STDOUT: 
	I0918 13:31:50.200347    4582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:31:50.200374    4582 client.go:171] duration metric: took 265.940291ms to LocalClient.Create
	I0918 13:31:52.202437    4582 start.go:128] duration metric: took 2.289882583s to createHost
	I0918 13:31:52.202482    4582 start.go:83] releasing machines lock for "force-systemd-env-165000", held for 2.289964583s
	W0918 13:31:52.202511    4582 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:31:52.207941    4582 out.go:177] * Deleting "force-systemd-env-165000" in qemu2 ...
	W0918 13:31:52.237764    4582 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:31:52.237785    4582 start.go:729] Will try again in 5 seconds ...
	I0918 13:31:57.239930    4582 start.go:360] acquireMachinesLock for force-systemd-env-165000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:31:57.240405    4582 start.go:364] duration metric: took 364.667µs to acquireMachinesLock for "force-systemd-env-165000"
	I0918 13:31:57.240517    4582 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-165000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-165000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:31:57.240660    4582 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:31:57.250137    4582 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0918 13:31:57.291735    4582 start.go:159] libmachine.API.Create for "force-systemd-env-165000" (driver="qemu2")
	I0918 13:31:57.291787    4582 client.go:168] LocalClient.Create starting
	I0918 13:31:57.291898    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:31:57.291967    4582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:31:57.291986    4582 main.go:141] libmachine: Parsing certificate...
	I0918 13:31:57.292039    4582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:31:57.292079    4582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:31:57.292098    4582 main.go:141] libmachine: Parsing certificate...
	I0918 13:31:57.292531    4582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:31:57.458193    4582 main.go:141] libmachine: Creating SSH key...
	I0918 13:31:57.533758    4582 main.go:141] libmachine: Creating Disk image...
	I0918 13:31:57.533766    4582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:31:57.533964    4582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2
	I0918 13:31:57.543420    4582 main.go:141] libmachine: STDOUT: 
	I0918 13:31:57.543436    4582 main.go:141] libmachine: STDERR: 
	I0918 13:31:57.543506    4582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2 +20000M
	I0918 13:31:57.551339    4582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:31:57.551356    4582 main.go:141] libmachine: STDERR: 
	I0918 13:31:57.551368    4582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2
	I0918 13:31:57.551377    4582 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:31:57.551386    4582 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:31:57.551419    4582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:86:5c:e3:cc:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/force-systemd-env-165000/disk.qcow2
	I0918 13:31:57.553138    4582 main.go:141] libmachine: STDOUT: 
	I0918 13:31:57.553152    4582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:31:57.553165    4582 client.go:171] duration metric: took 261.376042ms to LocalClient.Create
	I0918 13:31:59.555190    4582 start.go:128] duration metric: took 2.314563834s to createHost
	I0918 13:31:59.555219    4582 start.go:83] releasing machines lock for "force-systemd-env-165000", held for 2.314862s
	W0918 13:31:59.555318    4582 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-165000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:31:59.564443    4582 out.go:201] 
	W0918 13:31:59.573413    4582 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:31:59.573420    4582 out.go:270] * 
	* 
	W0918 13:31:59.574112    4582 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:31:59.586407    4582 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-165000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-165000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-165000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (50.448083ms)

-- stdout --
	* The control-plane node force-systemd-env-165000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-165000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-165000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-18 13:31:59.64746 -0700 PDT m=+3290.750940668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-165000 -n force-systemd-env-165000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-165000 -n force-systemd-env-165000: exit status 7 (30.065625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-165000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-165000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-165000
--- FAIL: TestForceSystemdEnv (9.96s)

TestFunctional/parallel/ServiceCmdConnect (36.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-815000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-815000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-ghjvd" [7068915e-77da-4fac-9fa2-47a300dd8850] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-ghjvd" [7068915e-77da-4fac-9fa2-47a300dd8850] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008719s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30785
functional_test.go:1661: error fetching http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
I0918 12:59:50.079440    1516 retry.go:31] will retry after 1.456767444s: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
I0918 12:59:51.540188    1516 retry.go:31] will retry after 2.055430551s: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
I0918 12:59:53.599353    1516 retry.go:31] will retry after 3.320945407s: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
I0918 12:59:56.923127    1516 retry.go:31] will retry after 4.732854285s: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
I0918 13:00:01.659323    1516 retry.go:31] will retry after 5.946455851s: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
I0918 13:00:07.608272    1516 retry.go:31] will retry after 9.41540719s: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30785: Get "http://192.168.105.4:30785": dial tcp 192.168.105.4:30785: connect: connection refused
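-- diagnostic note --
The retry intervals above (roughly 1.5s, 2.1s, 3.3s, 4.7s, 5.9s, 9.4s) are consistent with exponential backoff plus jitter. The sketch below is a hypothetical reimplementation of that pattern, not minikube's retry.go; only the endpoint URL is copied from the log:

	package main

	import (
		"fmt"
		"math/rand"
		"net/http"
		"time"
	)

	// fetchWithBackoff retries an HTTP GET, roughly doubling the delay and
	// adding jitter each round so concurrent retries do not synchronize.
	func fetchWithBackoff(url string, attempts int) error {
		delay := time.Second
		var err error
		for i := 0; i < attempts; i++ {
			var resp *http.Response
			if resp, err = http.Get(url); err == nil {
				resp.Body.Close()
				return nil // endpoint answered
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		if err := fetchWithBackoff("http://192.168.105.4:30785", 6); err != nil {
			fmt.Println("giving up:", err)
		}
	}

No amount of retrying can succeed here, though: the service has no endpoints (see the svc describe below), so every dial to the node port is refused.
-- /diagnostic note --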
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-815000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-ghjvd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-815000/192.168.105.4
Start Time:       Wed, 18 Sep 2024 12:59:42 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://1d21a2f3b546de3d6ecc6dfe554eb53a3f39ee031d76999a7896c60cfd92d46d
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 18 Sep 2024 12:59:55 -0700
Finished:     Wed, 18 Sep 2024 12:59:55 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5knv9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5knv9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-ghjvd to functional-815000
Normal   Pulled     23s (x3 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    22s (x3 over 35s)  kubelet            Created container echoserver-arm
Normal   Started    22s (x3 over 35s)  kubelet            Started container echoserver-arm
Warning  BackOff    11s (x3 over 33s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-ghjvd_default(7068915e-77da-4fac-9fa2-47a300dd8850)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-815000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
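-- diagnostic note --
This single log line is the root cause of the whole failure: "exec format error" means the kernel refused to execute /usr/sbin/nginx, which on an arm64 node almost always means the image bundled a binary built for a different CPU architecture, so the container crash-loops and the service never gets endpoints. The sketch below is a hypothetical check, not part of the test suite; it prints the target machine of an ELF binary (the path argument is whatever file you extract from the image, e.g. its /usr/sbin/nginx):

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		f, err := elf.Open(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, "cannot parse ELF:", err)
			os.Exit(1)
		}
		defer f.Close()
		// EM_AARCH64 would run on this arm64 host; EM_X86_64 would
		// reproduce the "exec format error" seen in the pod log.
		fmt.Printf("binary machine: %s, host arch: %s\n", f.Machine, runtime.GOARCH)
	}
-- /diagnostic note --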
functional_test.go:1614: (dbg) Run:  kubectl --context functional-815000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.205.70
IPs:                      10.110.205.70
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30785/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-815000 -n functional-815000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh -- ls                                                                                         | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh cat                                                                                           | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | /mount-9p/test-1726689603119464000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh stat                                                                                          | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh stat                                                                                          | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh sudo                                                                                          | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-815000                                                                                                | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port779639000/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh -- ls                                                                                         | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh sudo                                                                                          | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-815000                                                                                                | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-815000                                                                                                | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-815000                                                                                                | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-815000 ssh findmnt                                                                                       | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT | 18 Sep 24 13:00 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-815000                                                                                                | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-815000                                                                                                | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-815000 --dry-run                                                                                      | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-815000                                                                                                | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-815000 | jenkins | v1.34.0 | 18 Sep 24 13:00 PDT |                     |
	|           | -p functional-815000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
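	
	The block above is the tail of minikube's audit log for the TestFunctional mount subtests: each run starts a 9p mount, verifies it with findmnt/ls/cat/stat, then force-unmounts it, with the VerifyCleanup variant juggling three mounts at once. A minimal sketch of the same loop run by hand (the temporary host directory is generated per run; /tmp/mnt below is illustrative):
	
	  # serve the share in the background, then verify from inside the guest
	  minikube -p functional-815000 mount /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
	  minikube -p functional-815000 ssh "findmnt -T /mount-9p | grep 9p"    # mount is visible as 9p
	  minikube -p functional-815000 ssh "ls -la /mount-9p"                  # host files appear in the guest
	  minikube -p functional-815000 ssh "sudo umount -f /mount-9p"          # force teardown, as the test does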
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 13:00:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 13:00:11.089604    2760 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:00:11.089701    2760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:00:11.089704    2760 out.go:358] Setting ErrFile to fd 2...
	I0918 13:00:11.089706    2760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:00:11.089826    2760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:00:11.091289    2760 out.go:352] Setting JSON to false
	I0918 13:00:11.108530    2760 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1771,"bootTime":1726687840,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:00:11.108634    2760 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:00:11.113000    2760 out.go:177] * [functional-815000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:00:11.119850    2760 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:00:11.119912    2760 notify.go:220] Checking for updates...
	I0918 13:00:11.126999    2760 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:00:11.128460    2760 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:00:11.131985    2760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:00:11.135017    2760 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:00:11.138027    2760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:00:11.141253    2760 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:00:11.141488    2760 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:00:11.145980    2760 out.go:177] * Using the qemu2 driver based on the existing profile
	I0918 13:00:11.152973    2760 start.go:297] selected driver: qemu2
	I0918 13:00:11.152985    2760 start.go:901] validating driver "qemu2" against &{Name:functional-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:00:11.153087    2760 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:00:11.159058    2760 out.go:201] 
	W0918 13:00:11.162867    2760 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I0918 13:00:11.166928    2760 out.go:201] 
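	
	The failed start above is intentional: the test passes --dry-run --memory 250MB, and minikube's preflight validation rejects any allocation below the 1800MB usable minimum before the qemu2 VM is touched. A sketch of a dry-run that clears the same check (4000mb simply mirrors the Memory value in the existing profile):
	
	  minikube start -p functional-815000 --dry-run --memory 4000mb --driver=qemu2 --alsologtostderr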
	
	
	==> Docker <==
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.150966895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.151003478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.151013311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.151046103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.155640784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.155682241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.155692741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:12 functional-815000 dockerd[5643]: time="2024-09-18T20:00:12.155735782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:12 functional-815000 cri-dockerd[5895]: time="2024-09-18T20:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cb74fbd51ce3bdcca69dbb69177cca69efd8baa3c839cad7eb8bc610410e4abf/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 18 20:00:12 functional-815000 cri-dockerd[5895]: time="2024-09-18T20:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/585b6375b9bff05de8fe72e4e0e9eb4990d160441f4282f28ad0cf49e9245180/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 18 20:00:12 functional-815000 dockerd[5637]: time="2024-09-18T20:00:12.445823338Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 18 20:00:13 functional-815000 dockerd[5643]: time="2024-09-18T20:00:13.032379680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 20:00:13 functional-815000 dockerd[5643]: time="2024-09-18T20:00:13.032417846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 20:00:13 functional-815000 dockerd[5643]: time="2024-09-18T20:00:13.032426762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:13 functional-815000 dockerd[5643]: time="2024-09-18T20:00:13.032462679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:13 functional-815000 dockerd[5637]: time="2024-09-18T20:00:13.052299085Z" level=info msg="ignoring event" container=d4d5a5a4dd068c7c5deef5a40ac673c524b23fe003c91655d99c08e6339becfc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 20:00:13 functional-815000 dockerd[5643]: time="2024-09-18T20:00:13.052524331Z" level=info msg="shim disconnected" id=d4d5a5a4dd068c7c5deef5a40ac673c524b23fe003c91655d99c08e6339becfc namespace=moby
	Sep 18 20:00:13 functional-815000 dockerd[5643]: time="2024-09-18T20:00:13.052569331Z" level=warning msg="cleaning up after shim disconnected" id=d4d5a5a4dd068c7c5deef5a40ac673c524b23fe003c91655d99c08e6339becfc namespace=moby
	Sep 18 20:00:13 functional-815000 dockerd[5643]: time="2024-09-18T20:00:13.052573581Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 18 20:00:14 functional-815000 cri-dockerd[5895]: time="2024-09-18T20:00:14Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 18 20:00:14 functional-815000 dockerd[5643]: time="2024-09-18T20:00:14.196624577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 18 20:00:14 functional-815000 dockerd[5643]: time="2024-09-18T20:00:14.196655619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 18 20:00:14 functional-815000 dockerd[5643]: time="2024-09-18T20:00:14.196673618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:14 functional-815000 dockerd[5643]: time="2024-09-18T20:00:14.196706910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:00:14 functional-815000 dockerd[5637]: time="2024-09-18T20:00:14.352692371Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	9d9a2dcd0b2c0       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 seconds ago        Running             dashboard-metrics-scraper   0                   cb74fbd51ce3b       dashboard-metrics-scraper-c5db448b4-g9kxp
	d4d5a5a4dd068       72565bf5bbedf                                                                                          5 seconds ago        Exited              echoserver-arm              3                   486ecb238e041       hello-node-64b4f8f9ff-rmnjn
	71cc56c02e54e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    12 seconds ago       Exited              mount-munger                0                   0717568fc76d9       busybox-mount
	c136a3f95e82e       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          21 seconds ago       Running             myfrontend                  0                   4a88b2c44b9b4       sp-pod
	1d21a2f3b546d       72565bf5bbedf                                                                                          23 seconds ago       Exited              echoserver-arm              2                   f278211c25c23       hello-node-connect-65d86f57f4-ghjvd
	9d153bb2a1fd6       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          42 seconds ago       Running             nginx                       0                   6af60879fc697       nginx-svc
	8b7e0cb722f53       24a140c548c07                                                                                          About a minute ago   Running             kube-proxy                  0                   855604c0c00ec       kube-proxy-czf54
	187abbf0785ba       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     0                   4bda9754e8c01       coredns-7c65d6cfc9-v9c9x
	3ebf1f66418ba       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     0                   15c5b1127c557       coredns-7c65d6cfc9-gnkhl
	a5699cac240f1       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         0                   ddcd48761f572       storage-provisioner
	bc59f42b07c08       d3f53a98c0a9d                                                                                          About a minute ago   Running             kube-apiserver              0                   61174c16d7bcd       kube-apiserver-functional-815000
	dc01b9b4f68c3       27e3830e14027                                                                                          About a minute ago   Running             etcd                        0                   6195ee76187c6       etcd-functional-815000
	928c832792b1f       279f381cb3736                                                                                          About a minute ago   Running             kube-controller-manager     0                   fe79421aaa7fd       kube-controller-manager-functional-815000
	6d61dcab75de7       7f8aa378bb47d                                                                                          About a minute ago   Running             kube-scheduler              0                   0bbf6902d661b       kube-scheduler-functional-815000
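	
	Two rows above are Exited: the echoserver-arm containers behind hello-node and hello-node-connect, on restart attempts 3 and 2, which lines up with the CrashLoopBackOff the kubelet reports at the end of this log. A sketch for drilling into one of them (pod name taken from this table):
	
	  kubectl describe pod hello-node-connect-65d86f57f4-ghjvd          # back-off events and restart count
	  kubectl logs hello-node-connect-65d86f57f4-ghjvd --previous      # output of the last crashed attempt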
	
	
	==> coredns [187abbf0785b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	
	
	==> coredns [3ebf1f66418b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               functional-815000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-815000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=functional-815000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T12_59_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:59:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-815000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:00:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:00:07 +0000   Wed, 18 Sep 2024 19:59:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:00:07 +0000   Wed, 18 Sep 2024 19:59:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:00:07 +0000   Wed, 18 Sep 2024 19:59:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:00:07 +0000   Wed, 18 Sep 2024 19:59:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-815000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 2462f539adef4d0fa7e415c69618effe
	  System UUID:                2462f539adef4d0fa7e415c69618effe
	  Boot ID:                    12788d72-61fe-4dc6-9264-598c71e82b38
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-rmnjn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     hello-node-connect-65d86f57f4-ghjvd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-7c65d6cfc9-gnkhl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     66s
	  kube-system                 coredns-7c65d6cfc9-v9c9x                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     66s
	  kube-system                 etcd-functional-815000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         71s
	  kube-system                 kube-apiserver-functional-815000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-functional-815000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-czf54                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-functional-815000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-g9kxp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-8nrfk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (6%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x2 over 72s)  kubelet          Node functional-815000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x2 over 72s)  kubelet          Node functional-815000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x2 over 72s)  kubelet          Node functional-815000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                node-controller  Node functional-815000 event: Registered Node functional-815000 in Controller
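	
	The Allocated resources block is just the column sums of the pod table: 2x100m (coredns) + 100m (etcd) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 850m of 2000m CPU, i.e. 42%, and 2x70Mi (coredns) + 100Mi (etcd) = 240Mi of roughly 3813Mi allocatable memory, i.e. 6%. The same summary can be pulled directly:
	
	  kubectl describe node functional-815000 | grep -A 8 "Allocated resources"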
	
	
	==> dmesg <==
	[ +10.756957] systemd-fstab-generator[5146]: Ignoring "noauto" option for root device
	[  +0.052500] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.114826] systemd-fstab-generator[5180]: Ignoring "noauto" option for root device
	[  +0.100912] systemd-fstab-generator[5192]: Ignoring "noauto" option for root device
	[  +0.096378] systemd-fstab-generator[5206]: Ignoring "noauto" option for root device
	[  +5.124901] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.405192] systemd-fstab-generator[5844]: Ignoring "noauto" option for root device
	[  +0.085295] systemd-fstab-generator[5856]: Ignoring "noauto" option for root device
	[  +0.084533] systemd-fstab-generator[5868]: Ignoring "noauto" option for root device
	[  +0.099128] systemd-fstab-generator[5883]: Ignoring "noauto" option for root device
	[  +0.228129] systemd-fstab-generator[6051]: Ignoring "noauto" option for root device
	[  +0.945761] systemd-fstab-generator[6173]: Ignoring "noauto" option for root device
	[Sep18 19:55] kauditd_printk_skb: 189 callbacks suppressed
	[Sep18 19:59] systemd-fstab-generator[17811]: Ignoring "noauto" option for root device
	[  +4.024209] systemd-fstab-generator[18228]: Ignoring "noauto" option for root device
	[  +0.052948] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.619363] systemd-fstab-generator[18349]: Ignoring "noauto" option for root device
	[  +0.044221] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.705467] kauditd_printk_skb: 74 callbacks suppressed
	[ +10.561269] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.460523] kauditd_printk_skb: 34 callbacks suppressed
	[ +11.915194] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.890041] kauditd_printk_skb: 1 callbacks suppressed
	[Sep18 20:00] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.540239] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [dc01b9b4f68c] <==
	{"level":"info","ts":"2024-09-18T19:59:03.057410Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-18T19:59:03.057476Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-18T19:59:03.057485Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-18T19:59:03.059307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-09-18T19:59:03.059349Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-18T19:59:03.340760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-18T19:59:03.340852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-18T19:59:03.340892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 1"}
	{"level":"info","ts":"2024-09-18T19:59:03.340918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 2"}
	{"level":"info","ts":"2024-09-18T19:59:03.340934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-18T19:59:03.340976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 2"}
	{"level":"info","ts":"2024-09-18T19:59:03.340996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-18T19:59:03.348399Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-815000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T19:59:03.348440Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:59:03.348469Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:59:03.348640Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:59:03.348924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T19:59:03.348957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T19:59:03.348989Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:59:03.349042Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:59:03.349067Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:59:03.349396Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:59:03.349639Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:59:03.349918Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-18T19:59:03.353271Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:00:17 up 6 min,  0 users,  load average: 0.92, 0.56, 0.27
	Linux functional-815000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bc59f42b07c0] <==
	I0918 19:59:04.199968       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0918 19:59:04.200023       1 policy_source.go:224] refreshing policies
	I0918 19:59:04.221890       1 controller.go:615] quota admission added evaluator for: namespaces
	I0918 19:59:04.231309       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 19:59:05.068250       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0918 19:59:05.087565       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0918 19:59:05.087589       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 19:59:05.240710       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 19:59:05.251636       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 19:59:05.357748       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0918 19:59:05.368462       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0918 19:59:05.369000       1 controller.go:615] quota admission added evaluator for: endpoints
	I0918 19:59:05.379651       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 19:59:06.131016       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0918 19:59:06.147241       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0918 19:59:06.152079       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0918 19:59:06.155958       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0918 19:59:11.614460       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0918 19:59:11.864395       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0918 19:59:22.464568       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.216.31"}
	I0918 19:59:27.464786       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.10.226"}
	I0918 19:59:31.699133       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.74.244"}
	I0918 19:59:42.138487       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.205.70"}
	I0918 20:00:11.860115       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.143.210"}
	I0918 20:00:11.871446       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.111.136"}
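	
	Besides the quota-admission registrations, the apiserver log records a ClusterIP for each service the test suite creates: invalid-svc, hello-node, nginx-svc, hello-node-connect, and finally the two dashboard services at 20:00:11. To cross-check the allocations against live state:
	
	  kubectl get svc -A -o wide   # ClusterIPs should match the alloc.go entries above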
	
	
	==> kube-controller-manager [928c832792b1] <==
	I0918 19:59:55.513887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="37.458µs"
	I0918 19:59:58.016071       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="58.458µs"
	I0918 20:00:07.006568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="100.249µs"
	I0918 20:00:07.373022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-815000"
	I0918 20:00:11.786925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.481012ms"
	E0918 20:00:11.786947       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0918 20:00:11.794723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.649858ms"
	E0918 20:00:11.794743       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0918 20:00:11.799406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.577696ms"
	E0918 20:00:11.799425       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0918 20:00:11.799084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.761687ms"
	E0918 20:00:11.799834       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0918 20:00:11.803048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.151051ms"
	E0918 20:00:11.803311       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0918 20:00:11.812156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.729623ms"
	I0918 20:00:11.817699       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.235289ms"
	I0918 20:00:11.828217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.483593ms"
	I0918 20:00:11.828321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="49.832µs"
	I0918 20:00:11.828376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="16.197548ms"
	I0918 20:00:11.828425       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="39.875µs"
	I0918 20:00:11.832923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="21.458µs"
	I0918 20:00:11.850979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.625µs"
	I0918 20:00:13.807424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="30.874µs"
	I0918 20:00:14.853216       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.68919ms"
	I0918 20:00:14.853763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="21.624µs"
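	
	The repeated 'serviceaccount "kubernetes-dashboard" not found' errors at 20:00:11 are a creation-order race while the dashboard addon is applied: the ReplicaSet controller tries to create pods before the ServiceAccount object exists, fails, and retries; the clean "Finished syncing" entries milliseconds later show it recovered on its own. A quick check that the account landed (sketch):
	
	  kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard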
	
	
	==> kube-proxy [8b7e0cb722f5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0918 19:59:12.899472       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0918 19:59:12.903171       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0918 19:59:12.903229       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:59:12.911067       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0918 19:59:12.911083       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0918 19:59:12.911095       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:59:12.911670       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:59:12.911767       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:59:12.911778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:59:12.912414       1 config.go:199] "Starting service config controller"
	I0918 19:59:12.912443       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:59:12.912466       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:59:12.912481       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:59:12.912633       1 config.go:328] "Starting node config controller"
	I0918 19:59:12.912654       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:59:13.015811       1 shared_informer.go:320] Caches are synced for node config
	I0918 19:59:13.015831       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 19:59:13.015942       1 shared_informer.go:320] Caches are synced for service config
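	
	The nftables failures at the top are cleanup-only: kube-proxy probes for stale nftables rules, the Buildroot guest kernel answers "Operation not supported", and the proxy proceeds with the iptables backend in IPv4 single-stack mode, as the "Using iptables Proxier" line confirms. To inspect the resulting service rules in the guest (sketch; KUBE-SERVICES is the iptables proxier's standard entry chain):
	
	  minikube -p functional-815000 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head"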
	
	
	==> kube-scheduler [6d61dcab75de] <==
	W0918 19:59:04.275546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:59:04.275583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.276875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:59:04.277000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.276891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:59:04.277059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.276920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 19:59:04.277102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.276933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:59:04.277157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.276936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:59:04.277202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.277302       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:59:04.277325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.277351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:59:04.277364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.277453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:59:04.277501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:04.277589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:59:04.277625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:05.098385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:59:05.098598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:59:05.107039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:59:05.107293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0918 19:59:05.870437       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
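
	The forbidden errors above come from the kube-scheduler's informers starting before RBAC bootstrapping has finished; the final "Caches are synced" line shows they resolved on their own. Had they persisted, the scheduler's effective permissions could be probed by impersonation from the test's admin kubeconfig (a diagnostic sketch, not part of the harness):

	kubectl --context functional-815000 auth can-i list pods --as=system:kube-scheduler --all-namespaces
	kubectl --context functional-815000 auth can-i watch nodes --as=system:kube-scheduler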
	
	
	==> kubelet <==
	Sep 18 20:00:06 functional-815000 kubelet[18235]: E0918 20:00:06.002565   18235 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 18 20:00:06 functional-815000 kubelet[18235]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 18 20:00:06 functional-815000 kubelet[18235]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 18 20:00:06 functional-815000 kubelet[18235]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 18 20:00:06 functional-815000 kubelet[18235]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 18 20:00:06 functional-815000 kubelet[18235]: I0918 20:00:06.989856   18235 scope.go:117] "RemoveContainer" containerID="1d21a2f3b546de3d6ecc6dfe554eb53a3f39ee031d76999a7896c60cfd92d46d"
	Sep 18 20:00:06 functional-815000 kubelet[18235]: E0918 20:00:06.990583   18235 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-ghjvd_default(7068915e-77da-4fac-9fa2-47a300dd8850)\"" pod="default/hello-node-connect-65d86f57f4-ghjvd" podUID="7068915e-77da-4fac-9fa2-47a300dd8850"
	Sep 18 20:00:07 functional-815000 kubelet[18235]: I0918 20:00:07.930079   18235 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/7eb242a8-75ba-4702-ada5-e163d4f52c17-test-volume\") pod \"7eb242a8-75ba-4702-ada5-e163d4f52c17\" (UID: \"7eb242a8-75ba-4702-ada5-e163d4f52c17\") "
	Sep 18 20:00:07 functional-815000 kubelet[18235]: I0918 20:00:07.930329   18235 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eb242a8-75ba-4702-ada5-e163d4f52c17-test-volume" (OuterVolumeSpecName: "test-volume") pod "7eb242a8-75ba-4702-ada5-e163d4f52c17" (UID: "7eb242a8-75ba-4702-ada5-e163d4f52c17"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 20:00:07 functional-815000 kubelet[18235]: I0918 20:00:07.930339   18235 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpwqc\" (UniqueName: \"kubernetes.io/projected/7eb242a8-75ba-4702-ada5-e163d4f52c17-kube-api-access-tpwqc\") pod \"7eb242a8-75ba-4702-ada5-e163d4f52c17\" (UID: \"7eb242a8-75ba-4702-ada5-e163d4f52c17\") "
	Sep 18 20:00:07 functional-815000 kubelet[18235]: I0918 20:00:07.930398   18235 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/7eb242a8-75ba-4702-ada5-e163d4f52c17-test-volume\") on node \"functional-815000\" DevicePath \"\""
	Sep 18 20:00:07 functional-815000 kubelet[18235]: I0918 20:00:07.931807   18235 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb242a8-75ba-4702-ada5-e163d4f52c17-kube-api-access-tpwqc" (OuterVolumeSpecName: "kube-api-access-tpwqc") pod "7eb242a8-75ba-4702-ada5-e163d4f52c17" (UID: "7eb242a8-75ba-4702-ada5-e163d4f52c17"). InnerVolumeSpecName "kube-api-access-tpwqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 20:00:08 functional-815000 kubelet[18235]: I0918 20:00:08.030732   18235 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tpwqc\" (UniqueName: \"kubernetes.io/projected/7eb242a8-75ba-4702-ada5-e163d4f52c17-kube-api-access-tpwqc\") on node \"functional-815000\" DevicePath \"\""
	Sep 18 20:00:08 functional-815000 kubelet[18235]: I0918 20:00:08.733535   18235 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0717568fc76d985f9ebb6e9028b3d96a7178a586ad4939f48d18ab3a4efd9c44"
	Sep 18 20:00:11 functional-815000 kubelet[18235]: E0918 20:00:11.813281   18235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7eb242a8-75ba-4702-ada5-e163d4f52c17" containerName="mount-munger"
	Sep 18 20:00:11 functional-815000 kubelet[18235]: I0918 20:00:11.813311   18235 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb242a8-75ba-4702-ada5-e163d4f52c17" containerName="mount-munger"
	Sep 18 20:00:11 functional-815000 kubelet[18235]: I0918 20:00:11.859961   18235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghtds\" (UniqueName: \"kubernetes.io/projected/fc875144-4325-47c1-935f-f516ce40dcd2-kube-api-access-ghtds\") pod \"kubernetes-dashboard-695b96c756-8nrfk\" (UID: \"fc875144-4325-47c1-935f-f516ce40dcd2\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-8nrfk"
	Sep 18 20:00:11 functional-815000 kubelet[18235]: I0918 20:00:11.859988   18235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc875144-4325-47c1-935f-f516ce40dcd2-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-8nrfk\" (UID: \"fc875144-4325-47c1-935f-f516ce40dcd2\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-8nrfk"
	Sep 18 20:00:11 functional-815000 kubelet[18235]: I0918 20:00:11.859999   18235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz22t\" (UniqueName: \"kubernetes.io/projected/dae8c5ad-756a-495f-8988-842a8656cdd2-kube-api-access-mz22t\") pod \"dashboard-metrics-scraper-c5db448b4-g9kxp\" (UID: \"dae8c5ad-756a-495f-8988-842a8656cdd2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-g9kxp"
	Sep 18 20:00:11 functional-815000 kubelet[18235]: I0918 20:00:11.860010   18235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dae8c5ad-756a-495f-8988-842a8656cdd2-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-g9kxp\" (UID: \"dae8c5ad-756a-495f-8988-842a8656cdd2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-g9kxp"
	Sep 18 20:00:12 functional-815000 kubelet[18235]: I0918 20:00:12.988114   18235 scope.go:117] "RemoveContainer" containerID="f2dace7fe122b9bd231bfe6639a03998d0aa32f0fbb72821f853e7911c2a025b"
	Sep 18 20:00:13 functional-815000 kubelet[18235]: I0918 20:00:13.796051   18235 scope.go:117] "RemoveContainer" containerID="f2dace7fe122b9bd231bfe6639a03998d0aa32f0fbb72821f853e7911c2a025b"
	Sep 18 20:00:13 functional-815000 kubelet[18235]: I0918 20:00:13.796206   18235 scope.go:117] "RemoveContainer" containerID="d4d5a5a4dd068c7c5deef5a40ac673c524b23fe003c91655d99c08e6339becfc"
	Sep 18 20:00:13 functional-815000 kubelet[18235]: E0918 20:00:13.796297   18235 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-rmnjn_default(ce1bd7ea-cca0-4d5a-8c56-d0dc267b52df)\"" pod="default/hello-node-64b4f8f9ff-rmnjn" podUID="ce1bd7ea-cca0-4d5a-8c56-d0dc267b52df"
	Sep 18 20:00:14 functional-815000 kubelet[18235]: I0918 20:00:14.847720   18235 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-g9kxp" podStartSLOduration=1.944702968 podStartE2EDuration="3.847697263s" podCreationTimestamp="2024-09-18 20:00:11 +0000 UTC" firstStartedPulling="2024-09-18 20:00:12.230387702 +0000 UTC m=+66.290505954" lastFinishedPulling="2024-09-18 20:00:14.133381956 +0000 UTC m=+68.193500249" observedRunningTime="2024-09-18 20:00:14.847262853 +0000 UTC m=+68.907381146" watchObservedRunningTime="2024-09-18 20:00:14.847697263 +0000 UTC m=+68.907815556"
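
	The repeated "Could not set up iptables canary" entries mean the guest kernel exposes no IPv6 nat table for the kubelet's canary chain; nothing in these tests depends on it, so it is noise rather than the failure cause. Whether the module is merely unloaded can be checked from inside the VM (a sketch; ip6table_nat may simply not be shipped in this guest image):

	out/minikube-darwin-arm64 -p functional-815000 ssh -- sudo modprobe ip6table_nat
	out/minikube-darwin-arm64 -p functional-815000 ssh -- sudo ip6tables -t nat -L -n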
	
	
	==> storage-provisioner [a5699cac240f] <==
	I0918 19:59:12.124246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:59:12.128342       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:59:12.128394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:59:12.131796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:59:12.132913       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-815000_d7f45b4e-b014-4697-a6f3-0b34b563f004!
	I0918 19:59:12.133255       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c356d0eb-63f1-49f2-b0e5-6c1822e80fa7", APIVersion:"v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-815000_d7f45b4e-b014-4697-a6f3-0b34b563f004 became leader
	I0918 19:59:12.233519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-815000_d7f45b4e-b014-4697-a6f3-0b34b563f004!
	I0918 19:59:43.093034       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0918 19:59:43.093205       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3dba36e0-6ed2-480d-adcc-8ea5c10e9120 300 0 2024-09-18 19:59:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-18 19:59:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-ef4876fd-aedf-4457-bd91-66d2f1165299 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  ef4876fd-aedf-4457-bd91-66d2f1165299 516 0 2024-09-18 19:59:43 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-18 19:59:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-18 19:59:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0918 19:59:43.093574       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-ef4876fd-aedf-4457-bd91-66d2f1165299" provisioned
	I0918 19:59:43.093619       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0918 19:59:43.093636       1 volume_store.go:212] Trying to save persistentvolume "pvc-ef4876fd-aedf-4457-bd91-66d2f1165299"
	I0918 19:59:43.094476       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ef4876fd-aedf-4457-bd91-66d2f1165299", APIVersion:"v1", ResourceVersion:"516", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0918 19:59:43.098801       1 volume_store.go:219] persistentvolume "pvc-ef4876fd-aedf-4457-bd91-66d2f1165299" saved
	I0918 19:59:43.098930       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ef4876fd-aedf-4457-bd91-66d2f1165299", APIVersion:"v1", ResourceVersion:"516", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ef4876fd-aedf-4457-bd91-66d2f1165299
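
	The provisioner log above shows the complete happy path for "default/myclaim": lease acquired, volume pvc-ef4876fd-aedf-4457-bd91-66d2f1165299 provisioned under /tmp/hostpath-provisioner, PV saved, success event emitted. The binding can be cross-checked from the client side (a verification sketch using the same context and the object names from the log):

	kubectl --context functional-815000 get pvc myclaim -o jsonpath='{.status.phase}{" "}{.spec.volumeName}{"\n"}'
	kubectl --context functional-815000 get pv pvc-ef4876fd-aedf-4457-bd91-66d2f1165299 -o jsonpath='{.spec.hostPath.path}{"\n"}'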
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-815000 -n functional-815000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-815000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-695b96c756-8nrfk
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-815000 describe pod busybox-mount kubernetes-dashboard-695b96c756-8nrfk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-815000 describe pod busybox-mount kubernetes-dashboard-695b96c756-8nrfk: exit status 1 (40.851834ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-815000/192.168.105.4
	Start Time:       Wed, 18 Sep 2024 13:00:03 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://71cc56c02e54e63241b8f597fa590d1fecefcc00777d2830389b1118e2fbd203
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 18 Sep 2024 13:00:05 -0700
	      Finished:     Wed, 18 Sep 2024 13:00:05 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpwqc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tpwqc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-815000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.421s (1.421s including waiting). Image size: 3547125 bytes.
	  Normal  Created    12s   kubelet            Created container mount-munger
	  Normal  Started    12s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-8nrfk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-815000 describe pod busybox-mount kubernetes-dashboard-695b96c756-8nrfk: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.14s)
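
The test appears to have failed on the echoserver-arm CrashLoopBackOff visible in the kubelet log, not on the dashboard pod the post-mortem could no longer find. To reproduce just this case, the subtest can be rerun in isolation with go test's -run regex (a sketch, assuming execution from the minikube source tree; the --binary flag and its default are defined in test/integration/main_test.go, so verify them there first):

	cd test/integration
	go test -run 'TestFunctional/parallel/ServiceCmdConnect' -v -timeout 30m --binary=../../out/minikube-darwin-arm64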

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (64.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 node stop m02 -v=7 --alsologtostderr
E0918 13:04:29.831609    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:04:32.394982    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:04:37.518238    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-660000 node stop m02 -v=7 --alsologtostderr: (12.169804208s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr
E0918 13:04:47.759775    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr: (25.970802208s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
E0918 13:05:08.242070    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 3 (25.976600625s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 13:05:33.329610    3076 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0918 13:05:33.329623    3076 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.12s)
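
Both status probes here failed identically: the SSH dial to the primary node at 192.168.105.5:22 timed out, so the whole host was unreachable rather than one component being down. Reachability can be confirmed from the macOS runner before digging further (a sketch using BSD nc's -G connect-timeout flag):

	nc -vz -G 5 192.168.105.5 22
	nc -vz -G 5 192.168.105.6 22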

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0918 13:05:49.206495    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:05:56.283307    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.978843667s)
ha_test.go:413: expected profile "ha-660000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-660000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-660000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-660000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 3 (25.9556275s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 13:06:25.263321    3090 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0918 13:06:25.263330    3090 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (82.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-660000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.081670792s)

                                                
                                                
-- stdout --
	* Starting "ha-660000-m02" control-plane node in "ha-660000" cluster
	* Restarting existing qemu2 VM for "ha-660000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-660000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:06:25.296548    3097 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:06:25.296802    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:06:25.296808    3097 out.go:358] Setting ErrFile to fd 2...
	I0918 13:06:25.296810    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:06:25.296951    3097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:06:25.297207    3097 mustload.go:65] Loading cluster: ha-660000
	I0918 13:06:25.297454    3097 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0918 13:06:25.297702    3097 host.go:58] "ha-660000-m02" host status: Stopped
	I0918 13:06:25.302286    3097 out.go:177] * Starting "ha-660000-m02" control-plane node in "ha-660000" cluster
	I0918 13:06:25.305160    3097 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:06:25.305175    3097 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:06:25.305181    3097 cache.go:56] Caching tarball of preloaded images
	I0918 13:06:25.305248    3097 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:06:25.305254    3097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:06:25.305311    3097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/ha-660000/config.json ...
	I0918 13:06:25.305689    3097 start.go:360] acquireMachinesLock for ha-660000-m02: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:06:25.305750    3097 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "ha-660000-m02"
	I0918 13:06:25.305758    3097 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:06:25.305763    3097 fix.go:54] fixHost starting: m02
	I0918 13:06:25.305870    3097 fix.go:112] recreateIfNeeded on ha-660000-m02: state=Stopped err=<nil>
	W0918 13:06:25.305875    3097 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:06:25.309244    3097 out.go:177] * Restarting existing qemu2 VM for "ha-660000-m02" ...
	I0918 13:06:25.313203    3097 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:06:25.313248    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b5:77:7e:86:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/disk.qcow2
	I0918 13:06:25.315831    3097 main.go:141] libmachine: STDOUT: 
	I0918 13:06:25.315846    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:06:25.315875    3097 fix.go:56] duration metric: took 10.111666ms for fixHost
	I0918 13:06:25.315882    3097 start.go:83] releasing machines lock for "ha-660000-m02", held for 10.125541ms
	W0918 13:06:25.315888    3097 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:06:25.315918    3097 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:06:25.315922    3097 start.go:729] Will try again in 5 seconds ...
	I0918 13:06:30.317820    3097 start.go:360] acquireMachinesLock for ha-660000-m02: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:06:30.317961    3097 start.go:364] duration metric: took 107.291µs to acquireMachinesLock for "ha-660000-m02"
	I0918 13:06:30.317992    3097 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:06:30.317997    3097 fix.go:54] fixHost starting: m02
	I0918 13:06:30.318168    3097 fix.go:112] recreateIfNeeded on ha-660000-m02: state=Stopped err=<nil>
	W0918 13:06:30.318173    3097 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:06:30.321661    3097 out.go:177] * Restarting existing qemu2 VM for "ha-660000-m02" ...
	I0918 13:06:30.325699    3097 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:06:30.325751    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b5:77:7e:86:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/disk.qcow2
	I0918 13:06:30.327987    3097 main.go:141] libmachine: STDOUT: 
	I0918 13:06:30.328003    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:06:30.328041    3097 fix.go:56] duration metric: took 10.043875ms for fixHost
	I0918 13:06:30.328046    3097 start.go:83] releasing machines lock for "ha-660000-m02", held for 10.08075ms
	W0918 13:06:30.328079    3097 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:06:30.331594    3097 out.go:201] 
	W0918 13:06:30.335679    3097 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:06:30.335684    3097 out.go:270] * 
	* 
	W0918 13:06:30.337414    3097 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:06:30.341698    3097 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0918 13:06:25.296548    3097 out.go:345] Setting OutFile to fd 1 ...
I0918 13:06:25.296802    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:06:25.296808    3097 out.go:358] Setting ErrFile to fd 2...
I0918 13:06:25.296810    3097 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:06:25.296951    3097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
I0918 13:06:25.297207    3097 mustload.go:65] Loading cluster: ha-660000
I0918 13:06:25.297454    3097 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0918 13:06:25.297702    3097 host.go:58] "ha-660000-m02" host status: Stopped
I0918 13:06:25.302286    3097 out.go:177] * Starting "ha-660000-m02" control-plane node in "ha-660000" cluster
I0918 13:06:25.305160    3097 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 13:06:25.305175    3097 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0918 13:06:25.305181    3097 cache.go:56] Caching tarball of preloaded images
I0918 13:06:25.305248    3097 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0918 13:06:25.305254    3097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0918 13:06:25.305311    3097 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/ha-660000/config.json ...
I0918 13:06:25.305689    3097 start.go:360] acquireMachinesLock for ha-660000-m02: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0918 13:06:25.305750    3097 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "ha-660000-m02"
I0918 13:06:25.305758    3097 start.go:96] Skipping create...Using existing machine configuration
I0918 13:06:25.305763    3097 fix.go:54] fixHost starting: m02
I0918 13:06:25.305870    3097 fix.go:112] recreateIfNeeded on ha-660000-m02: state=Stopped err=<nil>
W0918 13:06:25.305875    3097 fix.go:138] unexpected machine state, will restart: <nil>
I0918 13:06:25.309244    3097 out.go:177] * Restarting existing qemu2 VM for "ha-660000-m02" ...
I0918 13:06:25.313203    3097 qemu.go:418] Using hvf for hardware acceleration
I0918 13:06:25.313248    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b5:77:7e:86:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/disk.qcow2
I0918 13:06:25.315831    3097 main.go:141] libmachine: STDOUT: 
I0918 13:06:25.315846    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0918 13:06:25.315875    3097 fix.go:56] duration metric: took 10.111666ms for fixHost
I0918 13:06:25.315882    3097 start.go:83] releasing machines lock for "ha-660000-m02", held for 10.125541ms
W0918 13:06:25.315888    3097 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0918 13:06:25.315918    3097 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0918 13:06:25.315922    3097 start.go:729] Will try again in 5 seconds ...
I0918 13:06:30.317820    3097 start.go:360] acquireMachinesLock for ha-660000-m02: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0918 13:06:30.317961    3097 start.go:364] duration metric: took 107.291µs to acquireMachinesLock for "ha-660000-m02"
I0918 13:06:30.317992    3097 start.go:96] Skipping create...Using existing machine configuration
I0918 13:06:30.317997    3097 fix.go:54] fixHost starting: m02
I0918 13:06:30.318168    3097 fix.go:112] recreateIfNeeded on ha-660000-m02: state=Stopped err=<nil>
W0918 13:06:30.318173    3097 fix.go:138] unexpected machine state, will restart: <nil>
I0918 13:06:30.321661    3097 out.go:177] * Restarting existing qemu2 VM for "ha-660000-m02" ...
I0918 13:06:30.325699    3097 qemu.go:418] Using hvf for hardware acceleration
I0918 13:06:30.325751    3097 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b5:77:7e:86:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000-m02/disk.qcow2
I0918 13:06:30.327987    3097 main.go:141] libmachine: STDOUT: 
I0918 13:06:30.328003    3097 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0918 13:06:30.328041    3097 fix.go:56] duration metric: took 10.043875ms for fixHost
I0918 13:06:30.328046    3097 start.go:83] releasing machines lock for "ha-660000-m02", held for 10.08075ms
W0918 13:06:30.328079    3097 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0918 13:06:30.331594    3097 out.go:201] 
W0918 13:06:30.335679    3097 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0918 13:06:30.335684    3097 out.go:270] * 
* 
W0918 13:06:30.337414    3097 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0918 13:06:30.341698    3097 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-660000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr: (25.960309792s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0918 13:07:11.126691    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (25.962276709s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
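
kubectl is timing out on the HA virtual IP (192.168.105.254:8443) rather than on any single node, which matches all control-plane hosts being down. The VIP can be probed directly from the runner; kube-apiserver serves /healthz to anonymous clients by default (a sketch; -k skips certificate verification):

	curl -k --connect-timeout 5 https://192.168.105.254:8443/healthz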
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 3 (25.965638917s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 13:07:48.230836    3116 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0918 13:07:48.230848    3116 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (82.97s)
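
Every restart attempt in this group dies at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives a network file descriptor and the VM cannot come up. The first thing to check on the macOS host is whether the socket_vmnet daemon is alive and owns the socket (a sketch, assuming the launchd-managed socket_vmnet install that the qemu2 driver setup describes):

	ls -l /var/run/socket_vmnet
	ps aux | grep '[s]ocket_vmnet'
	sudo launchctl list | grep socket_vmnet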

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-660000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-660000 -v=7 --alsologtostderr
E0918 13:09:27.245480    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:09:54.963548    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:10:56.271526    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-660000 -v=7 --alsologtostderr: (3m49.010600584s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-660000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-660000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.231146333s)

                                                
                                                
-- stdout --
	* [ha-660000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-660000" primary control-plane node in "ha-660000" cluster
	* Restarting existing qemu2 VM for "ha-660000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-660000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:11:39.373689    3165 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:11:39.373894    3165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:11:39.373899    3165 out.go:358] Setting ErrFile to fd 2...
	I0918 13:11:39.373902    3165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:11:39.374067    3165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:11:39.375287    3165 out.go:352] Setting JSON to false
	I0918 13:11:39.396066    3165 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2459,"bootTime":1726687840,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:11:39.396140    3165 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:11:39.401561    3165 out.go:177] * [ha-660000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:11:39.408475    3165 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:11:39.408499    3165 notify.go:220] Checking for updates...
	I0918 13:11:39.414507    3165 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:11:39.417468    3165 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:11:39.420600    3165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:11:39.423550    3165 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:11:39.426459    3165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:11:39.429847    3165 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:11:39.429907    3165 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:11:39.434536    3165 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:11:39.441482    3165 start.go:297] selected driver: qemu2
	I0918 13:11:39.441491    3165 start.go:901] validating driver "qemu2" against &{Name:ha-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-660000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:11:39.441571    3165 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:11:39.444472    3165 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:11:39.444522    3165 cni.go:84] Creating CNI manager for ""
	I0918 13:11:39.444546    3165 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0918 13:11:39.444617    3165 start.go:340] cluster config:
	{Name:ha-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-660000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:11:39.448825    3165 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:11:39.456536    3165 out.go:177] * Starting "ha-660000" primary control-plane node in "ha-660000" cluster
	I0918 13:11:39.460491    3165 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:11:39.460505    3165 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:11:39.460515    3165 cache.go:56] Caching tarball of preloaded images
	I0918 13:11:39.460582    3165 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:11:39.460588    3165 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:11:39.460664    3165 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/ha-660000/config.json ...
	I0918 13:11:39.461118    3165 start.go:360] acquireMachinesLock for ha-660000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:11:39.461154    3165 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "ha-660000"
	I0918 13:11:39.461165    3165 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:11:39.461171    3165 fix.go:54] fixHost starting: 
	I0918 13:11:39.461291    3165 fix.go:112] recreateIfNeeded on ha-660000: state=Stopped err=<nil>
	W0918 13:11:39.461300    3165 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:11:39.465497    3165 out.go:177] * Restarting existing qemu2 VM for "ha-660000" ...
	I0918 13:11:39.472387    3165 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:11:39.472419    3165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:12:ee:c2:38:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/disk.qcow2
	I0918 13:11:39.474559    3165 main.go:141] libmachine: STDOUT: 
	I0918 13:11:39.474578    3165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:11:39.474610    3165 fix.go:56] duration metric: took 13.44ms for fixHost
	I0918 13:11:39.474615    3165 start.go:83] releasing machines lock for "ha-660000", held for 13.455667ms
	W0918 13:11:39.474622    3165 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:11:39.474662    3165 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:11:39.474667    3165 start.go:729] Will try again in 5 seconds ...
	I0918 13:11:44.476640    3165 start.go:360] acquireMachinesLock for ha-660000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:11:44.477147    3165 start.go:364] duration metric: took 391.5µs to acquireMachinesLock for "ha-660000"
	I0918 13:11:44.477305    3165 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:11:44.477325    3165 fix.go:54] fixHost starting: 
	I0918 13:11:44.478064    3165 fix.go:112] recreateIfNeeded on ha-660000: state=Stopped err=<nil>
	W0918 13:11:44.478089    3165 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:11:44.487502    3165 out.go:177] * Restarting existing qemu2 VM for "ha-660000" ...
	I0918 13:11:44.491527    3165 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:11:44.491824    3165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:12:ee:c2:38:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/disk.qcow2
	I0918 13:11:44.501388    3165 main.go:141] libmachine: STDOUT: 
	I0918 13:11:44.501441    3165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:11:44.501537    3165 fix.go:56] duration metric: took 24.214709ms for fixHost
	I0918 13:11:44.501555    3165 start.go:83] releasing machines lock for "ha-660000", held for 24.385417ms
	W0918 13:11:44.501707    3165 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:11:44.510488    3165 out.go:201] 
	W0918 13:11:44.514599    3165 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:11:44.514670    3165 out.go:270] * 
	* 
	W0918 13:11:44.517454    3165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:11:44.527490    3165 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-660000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-660000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 7 (36.123542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)
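Every restart attempt in this test fails at the same point: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so the qemu2 VM never gets its network attachment and the run ends in GUEST_PROVISION. A minimal Go sketch of the same connectivity probe (not part of the test suite; the socket path is copied from the failure output above, and the check assumes the socket_vmnet daemon listens on a plain unix socket):

// probe_socket_vmnet.go: dial the socket that socket_vmnet_client reports as
// unreachable above. A stopped daemon reproduces the "Connection refused" error.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path copied from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet unreachable: %v\n", err) // expected in this run: connection refused
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the way the driver does here, the problem is on the build host (the socket_vmnet service is down), independent of any minikube profile state.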

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-660000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.997166ms)

-- stdout --
	* The control-plane node ha-660000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-660000"

-- /stdout --
** stderr ** 
	I0918 13:11:44.673703    3177 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:11:44.673951    3177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:11:44.673955    3177 out.go:358] Setting ErrFile to fd 2...
	I0918 13:11:44.673957    3177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:11:44.674089    3177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:11:44.674315    3177 mustload.go:65] Loading cluster: ha-660000
	I0918 13:11:44.674558    3177 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0918 13:11:44.674878    3177 out.go:270] ! The control-plane node ha-660000 host is not running (will try others): state=Stopped
	! The control-plane node ha-660000 host is not running (will try others): state=Stopped
	W0918 13:11:44.674992    3177 out.go:270] ! The control-plane node ha-660000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-660000-m02 host is not running (will try others): state=Stopped
	I0918 13:11:44.679410    3177 out.go:177] * The control-plane node ha-660000-m03 host is not running: state=Stopped
	I0918 13:11:44.682447    3177 out.go:177]   To start a cluster, run: "minikube start -p ha-660000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-660000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr: exit status 7 (30.771375ms)

-- stdout --
	ha-660000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-660000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-660000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-660000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 13:11:44.714826    3179 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:11:44.714991    3179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:11:44.714995    3179 out.go:358] Setting ErrFile to fd 2...
	I0918 13:11:44.714997    3179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:11:44.715155    3179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:11:44.715266    3179 out.go:352] Setting JSON to false
	I0918 13:11:44.715275    3179 mustload.go:65] Loading cluster: ha-660000
	I0918 13:11:44.715346    3179 notify.go:220] Checking for updates...
	I0918 13:11:44.715523    3179 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:11:44.715533    3179 status.go:174] checking status of ha-660000 ...
	I0918 13:11:44.715768    3179 status.go:364] ha-660000 host status = "Stopped" (err=<nil>)
	I0918 13:11:44.715771    3179 status.go:377] host is not running, skipping remaining checks
	I0918 13:11:44.715773    3179 status.go:176] ha-660000 status: &{Name:ha-660000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 13:11:44.715783    3179 status.go:174] checking status of ha-660000-m02 ...
	I0918 13:11:44.715871    3179 status.go:364] ha-660000-m02 host status = "Stopped" (err=<nil>)
	I0918 13:11:44.715874    3179 status.go:377] host is not running, skipping remaining checks
	I0918 13:11:44.715875    3179 status.go:176] ha-660000-m02 status: &{Name:ha-660000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 13:11:44.715881    3179 status.go:174] checking status of ha-660000-m03 ...
	I0918 13:11:44.715965    3179 status.go:364] ha-660000-m03 host status = "Stopped" (err=<nil>)
	I0918 13:11:44.715970    3179 status.go:377] host is not running, skipping remaining checks
	I0918 13:11:44.715971    3179 status.go:176] ha-660000-m03 status: &{Name:ha-660000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 13:11:44.715975    3179 status.go:174] checking status of ha-660000-m04 ...
	I0918 13:11:44.716076    3179 status.go:364] ha-660000-m04 host status = "Stopped" (err=<nil>)
	I0918 13:11:44.716079    3179 status.go:377] host is not running, skipping remaining checks
	I0918 13:11:44.716080    3179 status.go:176] ha-660000-m04 status: &{Name:ha-660000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 7 (30.400375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
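The delete fails with exit status 83 before any node is removed: the command loads the cluster, walks the control-plane nodes looking for a running host to talk to, and finds all of them stopped. A hedged Go sketch of that fallback walk (assumed behavior, reconstructed from the "will try others" warnings above; node names and states are copied from the log):

// Walk control-plane nodes in order and use the first running host; if none
// is running, print the same guidance the CLI shows in the stdout block.
package main

import "fmt"

func main() {
	nodes := []string{"ha-660000", "ha-660000-m02", "ha-660000-m03"}
	state := map[string]string{ // states taken from the status output above
		"ha-660000": "Stopped", "ha-660000-m02": "Stopped", "ha-660000-m03": "Stopped",
	}
	for _, n := range nodes {
		if state[n] == "Running" {
			fmt.Println("using control-plane node", n)
			return
		}
		fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", n, state[n])
	}
	fmt.Println(`To start a cluster, run: "minikube start -p ha-660000"`)
}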

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-660000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-660000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-660000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-660000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 7 (30.339958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
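This check never touches the VMs; it shells out to `minikube profile list --output json` and asserts the profile's Status field, which still reads "Starting" because the preceding restart never completed. A hedged Go sketch of that assertion (the struct shape is inferred from the JSON dump above; only the fields the check needs are decoded):

// Decode the Name/Status pair for each valid profile and flag any profile
// whose status does not match the expected value.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-660000" && p.Status != "Degraded" {
			fmt.Printf("expected %q, got %q\n", "Degraded", p.Status)
		}
	}
}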

TestMultiControlPlane/serial/StopCluster (300.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 stop -v=7 --alsologtostderr
E0918 13:12:19.361492    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:14:27.234910    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:15:56.259080    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-660000 stop -v=7 --alsologtostderr: (5m0.130393916s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr: exit status 7 (64.823584ms)

-- stdout --
	ha-660000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-660000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-660000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-660000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0918 13:16:45.006373    3233 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:16:45.006552    3233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:16:45.006556    3233 out.go:358] Setting ErrFile to fd 2...
	I0918 13:16:45.006558    3233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:16:45.006715    3233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:16:45.006857    3233 out.go:352] Setting JSON to false
	I0918 13:16:45.006869    3233 mustload.go:65] Loading cluster: ha-660000
	I0918 13:16:45.006910    3233 notify.go:220] Checking for updates...
	I0918 13:16:45.007177    3233 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:16:45.007189    3233 status.go:174] checking status of ha-660000 ...
	I0918 13:16:45.007494    3233 status.go:364] ha-660000 host status = "Stopped" (err=<nil>)
	I0918 13:16:45.007498    3233 status.go:377] host is not running, skipping remaining checks
	I0918 13:16:45.007501    3233 status.go:176] ha-660000 status: &{Name:ha-660000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 13:16:45.007513    3233 status.go:174] checking status of ha-660000-m02 ...
	I0918 13:16:45.007636    3233 status.go:364] ha-660000-m02 host status = "Stopped" (err=<nil>)
	I0918 13:16:45.007640    3233 status.go:377] host is not running, skipping remaining checks
	I0918 13:16:45.007642    3233 status.go:176] ha-660000-m02 status: &{Name:ha-660000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 13:16:45.007646    3233 status.go:174] checking status of ha-660000-m03 ...
	I0918 13:16:45.007762    3233 status.go:364] ha-660000-m03 host status = "Stopped" (err=<nil>)
	I0918 13:16:45.007767    3233 status.go:377] host is not running, skipping remaining checks
	I0918 13:16:45.007770    3233 status.go:176] ha-660000-m03 status: &{Name:ha-660000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 13:16:45.007778    3233 status.go:174] checking status of ha-660000-m04 ...
	I0918 13:16:45.007907    3233 status.go:364] ha-660000-m04 host status = "Stopped" (err=<nil>)
	I0918 13:16:45.007911    3233 status.go:377] host is not running, skipping remaining checks
	I0918 13:16:45.007913    3233 status.go:176] ha-660000-m04 status: &{Name:ha-660000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": ha-660000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": ha-660000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr": ha-660000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-660000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 7 (32.085833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (300.23s)
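The three assertions above ("not two control-plane nodes", "not three kubelets", "not two apiservers") all evaluate the same plain-text status dump; the wording suggests the test tallies marker lines rather than parsing the output structurally. A hedged Go sketch of that counting (assumed logic; the status text is abbreviated from the dump above):

// Count the marker substrings the assertions appear to key on.
package main

import (
	"fmt"
	"strings"
)

func main() {
	status := `ha-660000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
` // first node only; the full four-node dump is in the log above

	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))
	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
	fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped"))
}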

TestMultiControlPlane/serial/RestartCluster (5.24s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-660000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-660000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.174518042s)

-- stdout --
	* [ha-660000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-660000" primary control-plane node in "ha-660000" cluster
	* Restarting existing qemu2 VM for "ha-660000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-660000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:16:45.068959    3237 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:16:45.069084    3237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:16:45.069087    3237 out.go:358] Setting ErrFile to fd 2...
	I0918 13:16:45.069090    3237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:16:45.069217    3237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:16:45.070287    3237 out.go:352] Setting JSON to false
	I0918 13:16:45.086393    3237 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2765,"bootTime":1726687840,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:16:45.086462    3237 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:16:45.091895    3237 out.go:177] * [ha-660000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:16:45.098881    3237 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:16:45.098953    3237 notify.go:220] Checking for updates...
	I0918 13:16:45.105881    3237 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:16:45.108917    3237 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:16:45.111850    3237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:16:45.114866    3237 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:16:45.117889    3237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:16:45.121184    3237 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:16:45.121457    3237 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:16:45.125904    3237 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:16:45.128858    3237 start.go:297] selected driver: qemu2
	I0918 13:16:45.128864    3237 start.go:901] validating driver "qemu2" against &{Name:ha-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-660000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:16:45.128933    3237 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:16:45.131370    3237 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:16:45.131396    3237 cni.go:84] Creating CNI manager for ""
	I0918 13:16:45.131416    3237 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0918 13:16:45.131465    3237 start.go:340] cluster config:
	{Name:ha-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-660000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:16:45.135135    3237 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:16:45.142884    3237 out.go:177] * Starting "ha-660000" primary control-plane node in "ha-660000" cluster
	I0918 13:16:45.145898    3237 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:16:45.145914    3237 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:16:45.145932    3237 cache.go:56] Caching tarball of preloaded images
	I0918 13:16:45.146015    3237 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:16:45.146021    3237 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:16:45.146123    3237 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/ha-660000/config.json ...
	I0918 13:16:45.146557    3237 start.go:360] acquireMachinesLock for ha-660000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:16:45.146594    3237 start.go:364] duration metric: took 30.584µs to acquireMachinesLock for "ha-660000"
	I0918 13:16:45.146602    3237 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:16:45.146610    3237 fix.go:54] fixHost starting: 
	I0918 13:16:45.146737    3237 fix.go:112] recreateIfNeeded on ha-660000: state=Stopped err=<nil>
	W0918 13:16:45.146744    3237 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:16:45.150843    3237 out.go:177] * Restarting existing qemu2 VM for "ha-660000" ...
	I0918 13:16:45.157921    3237 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:16:45.157966    3237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:12:ee:c2:38:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/disk.qcow2
	I0918 13:16:45.159961    3237 main.go:141] libmachine: STDOUT: 
	I0918 13:16:45.159978    3237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:16:45.160006    3237 fix.go:56] duration metric: took 13.397541ms for fixHost
	I0918 13:16:45.160011    3237 start.go:83] releasing machines lock for "ha-660000", held for 13.41325ms
	W0918 13:16:45.160016    3237 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:16:45.160046    3237 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:16:45.160051    3237 start.go:729] Will try again in 5 seconds ...
	I0918 13:16:50.161969    3237 start.go:360] acquireMachinesLock for ha-660000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:16:50.162306    3237 start.go:364] duration metric: took 274.333µs to acquireMachinesLock for "ha-660000"
	I0918 13:16:50.162429    3237 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:16:50.162446    3237 fix.go:54] fixHost starting: 
	I0918 13:16:50.163113    3237 fix.go:112] recreateIfNeeded on ha-660000: state=Stopped err=<nil>
	W0918 13:16:50.163138    3237 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:16:50.170513    3237 out.go:177] * Restarting existing qemu2 VM for "ha-660000" ...
	I0918 13:16:50.174527    3237 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:16:50.174672    3237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:12:ee:c2:38:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/ha-660000/disk.qcow2
	I0918 13:16:50.183514    3237 main.go:141] libmachine: STDOUT: 
	I0918 13:16:50.183601    3237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:16:50.183699    3237 fix.go:56] duration metric: took 21.255583ms for fixHost
	I0918 13:16:50.183716    3237 start.go:83] releasing machines lock for "ha-660000", held for 21.389125ms
	W0918 13:16:50.183860    3237 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:16:50.189456    3237 out.go:201] 
	W0918 13:16:50.193571    3237 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:16:50.193596    3237 out.go:270] * 
	* 
	W0918 13:16:50.196249    3237 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:16:50.207509    3237 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-660000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 7 (68.263417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.24s)
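
A note on the failure mode above: each restart attempt dies because the qemu2 driver execs /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and gets "Connection refused", i.e. no socket_vmnet daemon is listening on that path. A minimal triage sketch for the CI host, assuming a default socket_vmnet install (the daemon invocation and gateway address below are assumptions, not taken from this log):

	# Does the socket exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet
	ps aux | grep '[s]ocket_vmnet'
	# If nothing is listening, run the daemon in the foreground as root and
	# watch for errors (binary path and gateway address are assumed defaults):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet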

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-660000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-660000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-660000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-660000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"k
ubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\"
:\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
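
The assertion at ha_test.go:413 inspects only the Status field of the matching profile in the JSON above ("Starting" where "Degraded" was expected, since the cluster never came back up). A quick sketch for pulling that field out when reproducing by hand, assuming jq is available (jq is not part of the test harness):

	# Prints "Starting" for this run; the test wanted "Degraded".
	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-660000") | .Status'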
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 7 (30.134959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-660000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-660000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.502958ms)

-- stdout --
	* The control-plane node ha-660000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-660000"

-- /stdout --
** stderr ** 
	I0918 13:16:50.395856    3255 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:16:50.396020    3255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:16:50.396023    3255 out.go:358] Setting ErrFile to fd 2...
	I0918 13:16:50.396026    3255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:16:50.396170    3255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:16:50.396413    3255 mustload.go:65] Loading cluster: ha-660000
	I0918 13:16:50.396649    3255 config.go:182] Loaded profile config "ha-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0918 13:16:50.396971    3255 out.go:270] ! The control-plane node ha-660000 host is not running (will try others): state=Stopped
	! The control-plane node ha-660000 host is not running (will try others): state=Stopped
	W0918 13:16:50.397080    3255 out.go:270] ! The control-plane node ha-660000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-660000-m02 host is not running (will try others): state=Stopped
	I0918 13:16:50.400127    3255 out.go:177] * The control-plane node ha-660000-m03 host is not running: state=Stopped
	I0918 13:16:50.403939    3255 out.go:177]   To start a cluster, run: "minikube start -p ha-660000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-660000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-660000 -n ha-660000: exit status 7 (30.01575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
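
Exit status 83 here is the "host not running" path: node add walks the control-plane nodes (ha-660000, then -m02, then -m03) in order, finds every host Stopped, and gives up before attempting the join. Once the socket_vmnet issue is resolved, the intended sequence is just the two commands the test already runs (a sketch, not verified against a live cluster):

	out/minikube-darwin-arm64 start -p ha-660000 --wait=true
	out/minikube-darwin-arm64 node add -p ha-660000 --control-plane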

TestImageBuild/serial/Setup (10.16s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-684000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-684000 --driver=qemu2 : exit status 80 (10.090912792s)

-- stdout --
	* [image-684000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-684000" primary control-plane node in "image-684000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-684000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-684000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-684000 -n image-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-684000 -n image-684000: exit status 7 (67.536541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.16s)

TestJSONOutput/start/Command (9.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-302000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-302000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.845609375s)

-- stdout --
	{"specversion":"1.0","id":"7c6acdb0-82bb-4be3-8153-3794470b9876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-302000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aff34a7d-d7f8-47d3-9007-e06abab7b530","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"99f4f3a9-af4d-4d14-aaf7-778a737da975","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig"}}
	{"specversion":"1.0","id":"6e23506d-175c-4474-b87e-e7d135d806c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"02167012-7d91-4dda-b036-ca3052bf72bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"970b0fd0-671b-4bd1-9b47-a3bd6d5925a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube"}}
	{"specversion":"1.0","id":"a6817db4-a7f7-4d22-89ae-7faf338d1742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91f57a50-a296-4bd3-8116-33be29614c3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dfa7127e-e061-48e3-90a4-ec421fd0a81c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"651fa7d1-05d0-4c8f-8678-cdf003d9144f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-302000\" primary control-plane node in \"json-output-302000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17f6966c-7f5e-4e78-9f35-f45c742cb1a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4d407463-676d-42ac-8a91-cc9575b57699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-302000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fdc0732-aa98-4361-b26c-f1372a139c37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0b2170f5-7305-4f15-a7f3-cb406076c572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"235127d3-7bc5-4059-b454-025d87d9fdd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-302000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"89620b55-ef52-47c8-bbe9-18236a8ad3dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"2bdbc544-f384-4802-a1fe-2b1996cce601","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-302000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.85s)
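
The marshalling failure at json_output_test.go:213 is a knock-on effect of the same socket_vmnet error: the raw "OUTPUT:" and "ERROR:" lines from the failed VM start are interleaved with the CloudEvents stream, so JSON decoding trips on the first non-JSON byte ('O'). A rough sketch for inspecting just the event stream when debugging by hand (the leading-brace filter is a heuristic; the well-formed event lines above all start with '{'):

	# Drop non-JSON noise, then list event type and message per line.
	out/minikube-darwin-arm64 start -p json-output-302000 --output=json \
	    --user=testUser --memory=2200 --wait=true --driver=qemu2 \
	  | grep '^{' | jq -r '[.type, .data.message // ""] | @tsv'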

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-302000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-302000 --output=json --user=testUser: exit status 83 (80.506125ms)

-- stdout --
	{"specversion":"1.0","id":"2ead496d-dbf8-40f2-9a44-385938dd3742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-302000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8590881f-b2cd-4f68-8a85-2f891893eeaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-302000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-302000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-302000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-302000 --output=json --user=testUser: exit status 83 (46.123375ms)

-- stdout --
	* The control-plane node json-output-302000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-302000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-302000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-302000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.27s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-762000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-762000 --driver=qemu2 : exit status 80 (9.963196125s)

-- stdout --
	* [first-762000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-762000" primary control-plane node in "first-762000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-762000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-762000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-762000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-18 13:17:23.512105 -0700 PDT m=+2414.658578460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-763000 -n second-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-763000 -n second-763000: exit status 85 (81.359709ms)

-- stdout --
	* Profile "second-763000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-763000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-763000" host is not running, skipping log retrieval (state="* Profile \"second-763000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-763000\"")
helpers_test.go:175: Cleaning up "second-763000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-763000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-18 13:17:23.703513 -0700 PDT m=+2414.849993543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-762000 -n first-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-762000 -n first-762000: exit status 7 (30.158625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-762000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-762000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-762000
--- FAIL: TestMinikubeProfile (10.27s)

TestMountStart/serial/StartWithMountFirst (9.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-552000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-552000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.913211291s)

-- stdout --
	* [mount-start-1-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-552000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-552000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-552000 -n mount-start-1-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-552000 -n mount-start-1-552000: exit status 7 (71.528417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.99s)

TestMultiNode/serial/FreshStart2Nodes (9.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-400000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-400000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.85046075s)

-- stdout --
	* [multinode-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-400000" primary control-plane node in "multinode-400000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-400000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:17:34.011488    3392 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:17:34.011626    3392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:17:34.011629    3392 out.go:358] Setting ErrFile to fd 2...
	I0918 13:17:34.011632    3392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:17:34.011774    3392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:17:34.012844    3392 out.go:352] Setting JSON to false
	I0918 13:17:34.029027    3392 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2814,"bootTime":1726687840,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:17:34.029094    3392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:17:34.035616    3392 out.go:177] * [multinode-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:17:34.043568    3392 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:17:34.043608    3392 notify.go:220] Checking for updates...
	I0918 13:17:34.050048    3392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:17:34.053553    3392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:17:34.056526    3392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:17:34.059567    3392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:17:34.062497    3392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:17:34.065684    3392 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:17:34.070518    3392 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:17:34.077552    3392 start.go:297] selected driver: qemu2
	I0918 13:17:34.077561    3392 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:17:34.077569    3392 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:17:34.079837    3392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:17:34.083579    3392 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:17:34.086564    3392 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:17:34.086584    3392 cni.go:84] Creating CNI manager for ""
	I0918 13:17:34.086614    3392 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0918 13:17:34.086618    3392 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 13:17:34.086653    3392 start.go:340] cluster config:
	{Name:multinode-400000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:17:34.090294    3392 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:17:34.097423    3392 out.go:177] * Starting "multinode-400000" primary control-plane node in "multinode-400000" cluster
	I0918 13:17:34.101527    3392 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:17:34.101543    3392 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:17:34.101548    3392 cache.go:56] Caching tarball of preloaded images
	I0918 13:17:34.101615    3392 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:17:34.101620    3392 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:17:34.101816    3392 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/multinode-400000/config.json ...
	I0918 13:17:34.101827    3392 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/multinode-400000/config.json: {Name:mk66ebb6988f3056f1cccc7f138af1a80ba34b65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:17:34.102042    3392 start.go:360] acquireMachinesLock for multinode-400000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:17:34.102076    3392 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "multinode-400000"
	I0918 13:17:34.102086    3392 start.go:93] Provisioning new machine with config: &{Name:multinode-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:17:34.102118    3392 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:17:34.109505    3392 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:17:34.127116    3392 start.go:159] libmachine.API.Create for "multinode-400000" (driver="qemu2")
	I0918 13:17:34.127149    3392 client.go:168] LocalClient.Create starting
	I0918 13:17:34.127206    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:17:34.127237    3392 main.go:141] libmachine: Decoding PEM data...
	I0918 13:17:34.127246    3392 main.go:141] libmachine: Parsing certificate...
	I0918 13:17:34.127286    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:17:34.127309    3392 main.go:141] libmachine: Decoding PEM data...
	I0918 13:17:34.127319    3392 main.go:141] libmachine: Parsing certificate...
	I0918 13:17:34.127707    3392 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:17:34.286670    3392 main.go:141] libmachine: Creating SSH key...
	I0918 13:17:34.351925    3392 main.go:141] libmachine: Creating Disk image...
	I0918 13:17:34.351931    3392 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:17:34.352111    3392 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:17:34.361673    3392 main.go:141] libmachine: STDOUT: 
	I0918 13:17:34.361694    3392 main.go:141] libmachine: STDERR: 
	I0918 13:17:34.361749    3392 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2 +20000M
	I0918 13:17:34.369826    3392 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:17:34.369839    3392 main.go:141] libmachine: STDERR: 
	I0918 13:17:34.369856    3392 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:17:34.369860    3392 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:17:34.369870    3392 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:17:34.369907    3392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:5b:2a:49:c7:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:17:34.371515    3392 main.go:141] libmachine: STDOUT: 
	I0918 13:17:34.371529    3392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:17:34.371549    3392 client.go:171] duration metric: took 244.404041ms to LocalClient.Create
	I0918 13:17:36.373702    3392 start.go:128] duration metric: took 2.271656625s to createHost
	I0918 13:17:36.373762    3392 start.go:83] releasing machines lock for "multinode-400000", held for 2.271771584s
	W0918 13:17:36.373859    3392 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:17:36.390004    3392 out.go:177] * Deleting "multinode-400000" in qemu2 ...
	W0918 13:17:36.421874    3392 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:17:36.421898    3392 start.go:729] Will try again in 5 seconds ...
	I0918 13:17:41.423982    3392 start.go:360] acquireMachinesLock for multinode-400000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:17:41.424620    3392 start.go:364] duration metric: took 502.916µs to acquireMachinesLock for "multinode-400000"
	I0918 13:17:41.424790    3392 start.go:93] Provisioning new machine with config: &{Name:multinode-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:17:41.425068    3392 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:17:41.443785    3392 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:17:41.496341    3392 start.go:159] libmachine.API.Create for "multinode-400000" (driver="qemu2")
	I0918 13:17:41.496394    3392 client.go:168] LocalClient.Create starting
	I0918 13:17:41.496510    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:17:41.496579    3392 main.go:141] libmachine: Decoding PEM data...
	I0918 13:17:41.496597    3392 main.go:141] libmachine: Parsing certificate...
	I0918 13:17:41.496662    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:17:41.496707    3392 main.go:141] libmachine: Decoding PEM data...
	I0918 13:17:41.496718    3392 main.go:141] libmachine: Parsing certificate...
	I0918 13:17:41.497305    3392 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:17:41.666681    3392 main.go:141] libmachine: Creating SSH key...
	I0918 13:17:41.759460    3392 main.go:141] libmachine: Creating Disk image...
	I0918 13:17:41.759466    3392 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:17:41.759661    3392 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:17:41.769258    3392 main.go:141] libmachine: STDOUT: 
	I0918 13:17:41.769274    3392 main.go:141] libmachine: STDERR: 
	I0918 13:17:41.769340    3392 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2 +20000M
	I0918 13:17:41.777332    3392 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:17:41.777359    3392 main.go:141] libmachine: STDERR: 
	I0918 13:17:41.777371    3392 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:17:41.777376    3392 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:17:41.777384    3392 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:17:41.777414    3392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:74:9a:1c:0b:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:17:41.778996    3392 main.go:141] libmachine: STDOUT: 
	I0918 13:17:41.779010    3392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:17:41.779023    3392 client.go:171] duration metric: took 282.63675ms to LocalClient.Create
	I0918 13:17:43.781121    3392 start.go:128] duration metric: took 2.356113792s to createHost
	I0918 13:17:43.781178    3392 start.go:83] releasing machines lock for "multinode-400000", held for 2.356630208s
	W0918 13:17:43.781665    3392 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:17:43.796346    3392 out.go:201] 
	W0918 13:17:43.800526    3392 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:17:43.800553    3392 out.go:270] * 
	* 
	W0918 13:17:43.803158    3392 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:17:43.819129    3392 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-400000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (66.857542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.92s)
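Every qemu2 start in this report dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the VM never boots and minikube exits with status 80. Below is a minimal pre-flight probe for that daemon, sketched in Go against the socket path shown in the log above; the two-second timeout is an arbitrary illustrative choice, not anything the test suite uses.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path reported in the failing run above; adjust if socket_vmnet is
	// installed elsewhere (an assumption, not read from minikube config).
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Matches the symptom in this report: nothing is listening on the
		// socket, so every qemu2+socket_vmnet start will fail the same way.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet reachable; qemu2 networked starts should be possible")
}

If this probe also reports "connection refused", the likely fix is on the host side (the socket_vmnet daemon is not running), which would explain why the rest of the TestMultiNode group fails as a cascade.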

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (71.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.673167ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-400000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- rollout status deployment/busybox: exit status 1 (59.258833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.916417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:17:44.149216    1516 retry.go:31] will retry after 1.17238131s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.750375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:17:45.428626    1516 retry.go:31] will retry after 1.740033895s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.605166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:17:47.274613    1516 retry.go:31] will retry after 2.516721796s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.459458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:17:49.897596    1516 retry.go:31] will retry after 5.003064456s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.553209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:17:55.009412    1516 retry.go:31] will retry after 6.918194016s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.644042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:18:02.031519    1516 retry.go:31] will retry after 8.758889988s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.084334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:18:10.896632    1516 retry.go:31] will retry after 6.881227921s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.5255ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:18:17.883459    1516 retry.go:31] will retry after 15.628657769s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.644292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0918 13:18:33.618565    1516 retry.go:31] will retry after 21.603550826s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.836625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.986416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.756292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.271208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.712041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.058708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (71.69s)
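The retry.go lines above show the test's backoff: roughly exponential waits (1.17s, 1.74s, 2.5s, 5s, 6.9s, ...) until the budget is spent, with every attempt failing on the same "no server found" error. A stdlib-only sketch of that pattern follows; the doubling factor, jitter, and 30-second budget are illustrative assumptions, not minikube's actual tuning.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered, roughly exponential delays until it
// succeeds or the total budget elapses, mirroring the log pattern above.
func retryExpo(fn func() error, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Sleep between delay and 2*delay, loosely matching 1.2s, 1.7s, 2.5s, 5s...
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New(`no server found for cluster "multinode-400000"`)
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}

In this run the retries can never succeed, since the cluster was never created; the backoff only stretches the failure out to the 71.69s recorded above.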

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-400000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.929583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (29.595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-400000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-400000 -v 3 --alsologtostderr: exit status 83 (44.26575ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-400000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-400000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:18:55.702373    3477 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:18:55.702538    3477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:55.702541    3477 out.go:358] Setting ErrFile to fd 2...
	I0918 13:18:55.702543    3477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:55.702674    3477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:18:55.702906    3477 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:18:55.703120    3477 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:18:55.706906    3477 out.go:177] * The control-plane node multinode-400000 host is not running: state=Stopped
	I0918 13:18:55.713342    3477 out.go:177]   To start a cluster, run: "minikube start -p multinode-400000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-400000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.446208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-400000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-400000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.957208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-400000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-400000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-400000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.517917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
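The second error here, "unexpected end of JSON input", is a knock-on effect: the jsonpath template wraps the node labels in [ ... ], but kubectl exited with a configuration error and produced no output at all, and encoding/json reports exactly that message for zero bytes. A small demonstration follows; the healthy-path sample omits the trailing comma the {range} template would actually emit, and how the real test normalizes that is elided here.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string

	// Zero bytes of output, as in the failed run above.
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input

	// Roughly what a healthy run would hand the test to decode.
	ok := []byte(`[{"kubernetes.io/hostname":"multinode-400000"}]`)
	err = json.Unmarshal(ok, &labels)
	fmt.Println(err, labels)
}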

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-400000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-400000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-400000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMN
UMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-400000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVe
rsion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\"
:\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (29.613666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
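The assertion above decodes `minikube profile list --output json` and counts Config.Nodes, expecting 3 and finding 1. A trimmed sketch of that check follows, with the struct shape read off the JSON dump captured in this log; only the fields needed for the count are kept, everything else is dropped.

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the output captured above: one node where the test
	// expected three, because the extra workers were never created.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-400000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s), want 3\n", p.Name, len(p.Config.Nodes))
	}
}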

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status --output json --alsologtostderr: exit status 7 (29.936333ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-400000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:18:55.912323    3489 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:18:55.912489    3489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:55.912493    3489 out.go:358] Setting ErrFile to fd 2...
	I0918 13:18:55.912495    3489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:55.912644    3489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:18:55.912775    3489 out.go:352] Setting JSON to true
	I0918 13:18:55.912788    3489 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:18:55.912843    3489 notify.go:220] Checking for updates...
	I0918 13:18:55.912982    3489 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:18:55.912990    3489 status.go:174] checking status of multinode-400000 ...
	I0918 13:18:55.913216    3489 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:18:55.913220    3489 status.go:377] host is not running, skipping remaining checks
	I0918 13:18:55.913222    3489 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-400000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.454458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
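The decode failure above (`json: cannot unmarshal object into Go value of type []cluster.Status`) is a shape mismatch: with a single stopped node, `minikube status --output json` emits one bare object, while the multinode test unmarshals into a slice. A hedged sketch of a decoder tolerant of both shapes follows; `Status` here is a local stand-in with two fields, not minikube's cluster.Status.

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name string `json:"Name"`
	Host string `json:"Host"`
}

// decodeStatuses accepts either a bare status object (single-node output,
// as in this run) or an array of them (multi-node output).
func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	// Single-object form, abbreviated from the stdout captured above.
	raw := []byte(`{"Name":"multinode-400000","Host":"Stopped"}`)
	got, err := decodeStatuses(raw)
	fmt.Println(got, err)
}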

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 node stop m03: exit status 85 (48.115334ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-400000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status: exit status 7 (30.419458ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr: exit status 7 (29.809833ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:18:56.051883    3497 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:18:56.052044    3497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:56.052048    3497 out.go:358] Setting ErrFile to fd 2...
	I0918 13:18:56.052050    3497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:56.052182    3497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:18:56.052304    3497 out.go:352] Setting JSON to false
	I0918 13:18:56.052314    3497 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:18:56.052399    3497 notify.go:220] Checking for updates...
	I0918 13:18:56.052532    3497 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:18:56.052541    3497 status.go:174] checking status of multinode-400000 ...
	I0918 13:18:56.052772    3497 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:18:56.052775    3497 status.go:377] host is not running, skipping remaining checks
	I0918 13:18:56.052777    3497 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr": multinode-400000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.3285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (46.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.892458ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:18:56.112819    3501 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:18:56.113061    3501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:56.113064    3501 out.go:358] Setting ErrFile to fd 2...
	I0918 13:18:56.113067    3501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:56.113221    3501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:18:56.113467    3501 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:18:56.113664    3501 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:18:56.118267    3501 out.go:201] 
	W0918 13:18:56.121350    3501 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0918 13:18:56.121356    3501 out.go:270] * 
	* 
	W0918 13:18:56.123075    3501 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:18:56.126191    3501 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0918 13:18:56.112819    3501 out.go:345] Setting OutFile to fd 1 ...
I0918 13:18:56.113061    3501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:18:56.113064    3501 out.go:358] Setting ErrFile to fd 2...
I0918 13:18:56.113067    3501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:18:56.113221    3501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
I0918 13:18:56.113467    3501 mustload.go:65] Loading cluster: multinode-400000
I0918 13:18:56.113664    3501 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:18:56.118267    3501 out.go:201] 
W0918 13:18:56.121350    3501 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0918 13:18:56.121356    3501 out.go:270] * 
* 
W0918 13:18:56.123075    3501 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0918 13:18:56.126191    3501 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-400000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (30.197083ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:18:56.159780    3503 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:18:56.159940    3503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:56.159944    3503 out.go:358] Setting ErrFile to fd 2...
	I0918 13:18:56.159946    3503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:56.160074    3503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:18:56.160184    3503 out.go:352] Setting JSON to false
	I0918 13:18:56.160194    3503 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:18:56.160260    3503 notify.go:220] Checking for updates...
	I0918 13:18:56.160408    3503 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:18:56.160417    3503 status.go:174] checking status of multinode-400000 ...
	I0918 13:18:56.160661    3503 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:18:56.160665    3503 status.go:377] host is not running, skipping remaining checks
	I0918 13:18:56.160667    3503 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:18:56.161507    1516 retry.go:31] will retry after 1.127709687s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (73.134375ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:18:57.361160    3505 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:18:57.361345    3505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:57.361349    3505 out.go:358] Setting ErrFile to fd 2...
	I0918 13:18:57.361352    3505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:57.361515    3505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:18:57.361669    3505 out.go:352] Setting JSON to false
	I0918 13:18:57.361681    3505 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:18:57.361721    3505 notify.go:220] Checking for updates...
	I0918 13:18:57.361946    3505 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:18:57.361960    3505 status.go:174] checking status of multinode-400000 ...
	I0918 13:18:57.362263    3505 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:18:57.362268    3505 status.go:377] host is not running, skipping remaining checks
	I0918 13:18:57.362271    3505 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:18:57.363253    1516 retry.go:31] will retry after 2.202653466s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (73.565958ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:18:59.637705    3508 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:18:59.637898    3508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:59.637903    3508 out.go:358] Setting ErrFile to fd 2...
	I0918 13:18:59.637907    3508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:18:59.638112    3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:18:59.638281    3508 out.go:352] Setting JSON to false
	I0918 13:18:59.638295    3508 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:18:59.638329    3508 notify.go:220] Checking for updates...
	I0918 13:18:59.638631    3508 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:18:59.638646    3508 status.go:174] checking status of multinode-400000 ...
	I0918 13:18:59.638982    3508 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:18:59.638989    3508 status.go:377] host is not running, skipping remaining checks
	I0918 13:18:59.638992    3508 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:18:59.640123    1516 retry.go:31] will retry after 2.011305257s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (73.6255ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:01.725072    3510 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:01.725277    3510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:01.725281    3510 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:01.725285    3510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:01.725461    3510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:01.725629    3510 out.go:352] Setting JSON to false
	I0918 13:19:01.725644    3510 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:01.725690    3510 notify.go:220] Checking for updates...
	I0918 13:19:01.725930    3510 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:01.725942    3510 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:01.726248    3510 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:01.726253    3510 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:01.726255    3510 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:19:01.727305    1516 retry.go:31] will retry after 4.244013538s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (73.809959ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:06.045001    3512 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:06.045202    3512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:06.045206    3512 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:06.045209    3512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:06.045403    3512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:06.045568    3512 out.go:352] Setting JSON to false
	I0918 13:19:06.045579    3512 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:06.045630    3512 notify.go:220] Checking for updates...
	I0918 13:19:06.045859    3512 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:06.045871    3512 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:06.046230    3512 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:06.046235    3512 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:06.046238    3512 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:19:06.047315    1516 retry.go:31] will retry after 3.078002696s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (75.607167ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:09.200974    3515 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:09.201193    3515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:09.201198    3515 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:09.201202    3515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:09.201434    3515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:09.201596    3515 out.go:352] Setting JSON to false
	I0918 13:19:09.201609    3515 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:09.201652    3515 notify.go:220] Checking for updates...
	I0918 13:19:09.201891    3515 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:09.201902    3515 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:09.202255    3515 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:09.202261    3515 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:09.202264    3515 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:19:09.203365    1516 retry.go:31] will retry after 5.546025634s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (73.630541ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:14.822703    3517 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:14.822926    3517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:14.822932    3517 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:14.822936    3517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:14.823134    3517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:14.823317    3517 out.go:352] Setting JSON to false
	I0918 13:19:14.823330    3517 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:14.823376    3517 notify.go:220] Checking for updates...
	I0918 13:19:14.823644    3517 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:14.823657    3517 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:14.824004    3517 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:14.824009    3517 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:14.824012    3517 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:19:14.825176    1516 retry.go:31] will retry after 12.556503676s: exit status 7
E0918 13:19:27.222381    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (74.492875ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:27.455689    3523 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:27.455907    3523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:27.455912    3523 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:27.455916    3523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:27.456108    3523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:27.456295    3523 out.go:352] Setting JSON to false
	I0918 13:19:27.456308    3523 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:27.456342    3523 notify.go:220] Checking for updates...
	I0918 13:19:27.456603    3523 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:27.456616    3523 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:27.456983    3523 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:27.456988    3523 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:27.456990    3523 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0918 13:19:27.458010    1516 retry.go:31] will retry after 15.422899147s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr: exit status 7 (73.946084ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:42.954418    3525 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:42.954622    3525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:42.954627    3525 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:42.954631    3525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:42.954838    3525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:42.954996    3525 out.go:352] Setting JSON to false
	I0918 13:19:42.955009    3525 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:42.955063    3525 notify.go:220] Checking for updates...
	I0918 13:19:42.955312    3525 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:42.955326    3525 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:42.955665    3525 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:42.955671    3525 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:42.955674    3525 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-400000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (34.449125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.91s)
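
Every retry above fails identically: the host never comes back because nothing is listening on /var/run/socket_vmnet. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as minikube's qemu2/socket_vmnet setup suggests; the socket and client paths are the ones reported in the log itself:

	# Is the socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Restart the daemon; assumes a Homebrew-managed service.
	sudo brew services restart socket_vmnet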

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-400000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-400000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-400000: (2.849335917s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-400000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-400000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219138584s)

                                                
                                                
-- stdout --
	* [multinode-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-400000" primary control-plane node in "multinode-400000" cluster
	* Restarting existing qemu2 VM for "multinode-400000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-400000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:45.931146    3549 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:45.931310    3549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:45.931315    3549 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:45.931318    3549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:45.931509    3549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:45.932692    3549 out.go:352] Setting JSON to false
	I0918 13:19:45.952029    3549 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2945,"bootTime":1726687840,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:19:45.952099    3549 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:19:45.956711    3549 out.go:177] * [multinode-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:19:45.963593    3549 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:19:45.963635    3549 notify.go:220] Checking for updates...
	I0918 13:19:45.970649    3549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:19:45.973649    3549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:19:45.976656    3549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:19:45.979675    3549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:19:45.982613    3549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:19:45.985956    3549 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:45.986025    3549 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:19:45.990690    3549 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:19:45.997610    3549 start.go:297] selected driver: qemu2
	I0918 13:19:45.997618    3549 start.go:901] validating driver "qemu2" against &{Name:multinode-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:19:45.997681    3549 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:19:46.000075    3549 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:19:46.000102    3549 cni.go:84] Creating CNI manager for ""
	I0918 13:19:46.000131    3549 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 13:19:46.000199    3549 start.go:340] cluster config:
	{Name:multinode-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:19:46.003965    3549 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:19:46.011586    3549 out.go:177] * Starting "multinode-400000" primary control-plane node in "multinode-400000" cluster
	I0918 13:19:46.015691    3549 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:19:46.015710    3549 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:19:46.015722    3549 cache.go:56] Caching tarball of preloaded images
	I0918 13:19:46.015805    3549 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:19:46.015812    3549 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:19:46.015868    3549 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/multinode-400000/config.json ...
	I0918 13:19:46.016376    3549 start.go:360] acquireMachinesLock for multinode-400000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:19:46.016415    3549 start.go:364] duration metric: took 32.834µs to acquireMachinesLock for "multinode-400000"
	I0918 13:19:46.016425    3549 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:19:46.016432    3549 fix.go:54] fixHost starting: 
	I0918 13:19:46.016568    3549 fix.go:112] recreateIfNeeded on multinode-400000: state=Stopped err=<nil>
	W0918 13:19:46.016582    3549 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:19:46.024622    3549 out.go:177] * Restarting existing qemu2 VM for "multinode-400000" ...
	I0918 13:19:46.028669    3549 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:19:46.028712    3549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:74:9a:1c:0b:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:19:46.030944    3549 main.go:141] libmachine: STDOUT: 
	I0918 13:19:46.030967    3549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:19:46.031004    3549 fix.go:56] duration metric: took 14.572ms for fixHost
	I0918 13:19:46.031009    3549 start.go:83] releasing machines lock for "multinode-400000", held for 14.589375ms
	W0918 13:19:46.031018    3549 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:19:46.031066    3549 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:19:46.031071    3549 start.go:729] Will try again in 5 seconds ...
	I0918 13:19:51.033072    3549 start.go:360] acquireMachinesLock for multinode-400000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:19:51.033524    3549 start.go:364] duration metric: took 355.791µs to acquireMachinesLock for "multinode-400000"
	I0918 13:19:51.033669    3549 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:19:51.033692    3549 fix.go:54] fixHost starting: 
	I0918 13:19:51.034431    3549 fix.go:112] recreateIfNeeded on multinode-400000: state=Stopped err=<nil>
	W0918 13:19:51.034461    3549 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:19:51.039047    3549 out.go:177] * Restarting existing qemu2 VM for "multinode-400000" ...
	I0918 13:19:51.046955    3549 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:19:51.047214    3549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:74:9a:1c:0b:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:19:51.056914    3549 main.go:141] libmachine: STDOUT: 
	I0918 13:19:51.056965    3549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:19:51.057060    3549 fix.go:56] duration metric: took 23.368709ms for fixHost
	I0918 13:19:51.057077    3549 start.go:83] releasing machines lock for "multinode-400000", held for 23.530625ms
	W0918 13:19:51.057268    3549 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-400000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-400000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:19:51.063865    3549 out.go:201] 
	W0918 13:19:51.067061    3549 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:19:51.067084    3549 out.go:270] * 
	* 
	W0918 13:19:51.069795    3549 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:19:51.077018    3549 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-400000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-400000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (32.286583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.20s)
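
The stderr shows minikube retrying once after 5 seconds (start.go:729) and then exiting with GUEST_PROVISION. The underlying connection failure can be reproduced without minikube by calling the client binary from the log directly; a sketch, with `true` as a hypothetical stand-in for the qemu command:

	# socket_vmnet_client connects to the daemon socket, then runs the given
	# command; while the daemon is down it should fail the same way as above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true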

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 node delete m03: exit status 83 (40.114917ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-400000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-400000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-400000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr: exit status 7 (30.434042ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:51.261448    3569 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:51.261595    3569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:51.261598    3569 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:51.261601    3569 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:51.261733    3569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:51.261850    3569 out.go:352] Setting JSON to false
	I0918 13:19:51.261859    3569 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:51.261911    3569 notify.go:220] Checking for updates...
	I0918 13:19:51.262057    3569 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:51.262065    3569 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:51.262298    3569 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:51.262302    3569 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:51.262304    3569 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.512708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
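
Exit status 83 here is not a node-deletion failure: per the stdout, the command refuses to run because the profile's control-plane host is stopped, so "node delete m03" never reaches the point of looking for m03. A hedged way to confirm what the profile actually contains:

	# List the nodes recorded in the profile (same command the suite uses above);
	# with only the stopped control-plane registered, there is no m03 at all.
	out/minikube-darwin-arm64 node list -p multinode-400000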

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-400000 stop: (1.957949958s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status: exit status 7 (64.777542ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr: exit status 7 (32.756708ms)

                                                
                                                
-- stdout --
	multinode-400000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:53.347983    3585 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:53.348125    3585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:53.348129    3585 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:53.348131    3585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:53.348250    3585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:53.348366    3585 out.go:352] Setting JSON to false
	I0918 13:19:53.348375    3585 mustload.go:65] Loading cluster: multinode-400000
	I0918 13:19:53.348437    3585 notify.go:220] Checking for updates...
	I0918 13:19:53.348590    3585 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:53.348599    3585 status.go:174] checking status of multinode-400000 ...
	I0918 13:19:53.348840    3585 status.go:364] multinode-400000 host status = "Stopped" (err=<nil>)
	I0918 13:19:53.348843    3585 status.go:377] host is not running, skipping remaining checks
	I0918 13:19:53.348845    3585 status.go:176] multinode-400000 status: &{Name:multinode-400000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr": multinode-400000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-400000 status --alsologtostderr": multinode-400000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.484334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.09s)
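
The stop itself succeeded; the assertions at multinode_test.go:364 and :368 fail on a count, since the test expects one "host: Stopped" and one "kubelet: Stopped" line per node (two for this suite) but the second node was never created. The count can be reproduced from the same status output; a sketch using the profile name from the log:

	# Prints 1 here; a healthy two-node cluster after "stop" would print 2.
	out/minikube-darwin-arm64 -p multinode-400000 status 2>/dev/null | grep -c "host: Stopped"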

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-400000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-400000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180428875s)

                                                
                                                
-- stdout --
	* [multinode-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-400000" primary control-plane node in "multinode-400000" cluster
	* Restarting existing qemu2 VM for "multinode-400000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-400000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 13:19:53.407503    3589 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:19:53.407640    3589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:53.407645    3589 out.go:358] Setting ErrFile to fd 2...
	I0918 13:19:53.407648    3589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:19:53.407757    3589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:19:53.408747    3589 out.go:352] Setting JSON to false
	I0918 13:19:53.424615    3589 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2953,"bootTime":1726687840,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:19:53.424688    3589 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:19:53.429397    3589 out.go:177] * [multinode-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:19:53.437362    3589 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:19:53.437436    3589 notify.go:220] Checking for updates...
	I0918 13:19:53.445194    3589 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:19:53.449373    3589 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:19:53.452375    3589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:19:53.453742    3589 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:19:53.457332    3589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:19:53.460686    3589 config.go:182] Loaded profile config "multinode-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:19:53.460952    3589 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:19:53.465190    3589 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:19:53.472393    3589 start.go:297] selected driver: qemu2
	I0918 13:19:53.472403    3589 start.go:901] validating driver "qemu2" against &{Name:multinode-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:19:53.472471    3589 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:19:53.474628    3589 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:19:53.474654    3589 cni.go:84] Creating CNI manager for ""
	I0918 13:19:53.474673    3589 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0918 13:19:53.474724    3589 start.go:340] cluster config:
	{Name:multinode-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:19:53.478149    3589 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:19:53.486305    3589 out.go:177] * Starting "multinode-400000" primary control-plane node in "multinode-400000" cluster
	I0918 13:19:53.490316    3589 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:19:53.490329    3589 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:19:53.490343    3589 cache.go:56] Caching tarball of preloaded images
	I0918 13:19:53.490386    3589 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:19:53.490391    3589 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:19:53.490443    3589 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/multinode-400000/config.json ...
	I0918 13:19:53.490879    3589 start.go:360] acquireMachinesLock for multinode-400000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:19:53.490907    3589 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "multinode-400000"
	I0918 13:19:53.490915    3589 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:19:53.490922    3589 fix.go:54] fixHost starting: 
	I0918 13:19:53.491035    3589 fix.go:112] recreateIfNeeded on multinode-400000: state=Stopped err=<nil>
	W0918 13:19:53.491043    3589 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:19:53.495306    3589 out.go:177] * Restarting existing qemu2 VM for "multinode-400000" ...
	I0918 13:19:53.499386    3589 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:19:53.499429    3589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:74:9a:1c:0b:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:19:53.501394    3589 main.go:141] libmachine: STDOUT: 
	I0918 13:19:53.501425    3589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:19:53.501458    3589 fix.go:56] duration metric: took 10.536208ms for fixHost
	I0918 13:19:53.501463    3589 start.go:83] releasing machines lock for "multinode-400000", held for 10.552584ms
	W0918 13:19:53.501469    3589 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:19:53.501516    3589 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:19:53.501520    3589 start.go:729] Will try again in 5 seconds ...
	I0918 13:19:58.503561    3589 start.go:360] acquireMachinesLock for multinode-400000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:19:58.503951    3589 start.go:364] duration metric: took 303.75µs to acquireMachinesLock for "multinode-400000"
	I0918 13:19:58.504079    3589 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:19:58.504098    3589 fix.go:54] fixHost starting: 
	I0918 13:19:58.504768    3589 fix.go:112] recreateIfNeeded on multinode-400000: state=Stopped err=<nil>
	W0918 13:19:58.504793    3589 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:19:58.509211    3589 out.go:177] * Restarting existing qemu2 VM for "multinode-400000" ...
	I0918 13:19:58.513195    3589 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:19:58.513423    3589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:74:9a:1c:0b:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/multinode-400000/disk.qcow2
	I0918 13:19:58.522775    3589 main.go:141] libmachine: STDOUT: 
	I0918 13:19:58.522845    3589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:19:58.522939    3589 fix.go:56] duration metric: took 18.843875ms for fixHost
	I0918 13:19:58.522967    3589 start.go:83] releasing machines lock for "multinode-400000", held for 18.9955ms
	W0918 13:19:58.523164    3589 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-400000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-400000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:19:58.531253    3589 out.go:201] 
	W0918 13:19:58.535249    3589 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:19:58.535288    3589 out.go:270] * 
	* 
	W0918 13:19:58.537569    3589 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:19:58.547141    3589 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-400000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (68.510709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
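Note: every failure in this test, and in the remaining tests below, bottoms out in the same error: the QEMU helper cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet. A minimal diagnostic sketch for the build agent follows; it uses only standard macOS tools, and nothing in it is part of the captured test run:

	# Is the daemon alive and still holding its socket?
	ls -l /var/run/socket_vmnet
	sudo pgrep -fl socket_vmnet
	# Probing the socket directly should reproduce "Connection refused" while it is down.
	nc -U /var/run/socket_vmnet < /dev/null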

TestMultiNode/serial/ValidateNameConflict (20.31s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-400000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-400000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-400000-m01 --driver=qemu2 : exit status 80 (9.983735791s)

-- stdout --
	* [multinode-400000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-400000-m01" primary control-plane node in "multinode-400000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-400000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-400000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-400000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-400000-m02 --driver=qemu2 : exit status 80 (10.097568792s)

-- stdout --
	* [multinode-400000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-400000-m02" primary control-plane node in "multinode-400000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-400000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-400000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-400000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-400000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-400000: exit status 83 (78.957125ms)

-- stdout --
	* The control-plane node multinode-400000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-400000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-400000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-400000 -n multinode-400000: exit status 7 (30.820333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-400000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.31s)
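Note: both `start` attempts above exit with status 80 (the GUEST_PROVISION class shown in the log), and the follow-up `node add` exits 83 because the control-plane host never came up. Re-runs start cleaner if leftover stopped profiles are removed first; standard minikube commands, with the profile name taken from the log above:

	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 delete -p multinode-400000-m01
	out/minikube-darwin-arm64 delete --all    # clears every leftover test profile at once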

TestPreload (10.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-913000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-913000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.861536458s)

-- stdout --
	* [test-preload-913000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-913000" primary control-plane node in "test-preload-913000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-913000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:20:19.078523    3642 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:20:19.078653    3642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:20:19.078656    3642 out.go:358] Setting ErrFile to fd 2...
	I0918 13:20:19.078658    3642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:20:19.078784    3642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:20:19.079827    3642 out.go:352] Setting JSON to false
	I0918 13:20:19.096136    3642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2979,"bootTime":1726687840,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:20:19.096202    3642 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:20:19.103119    3642 out.go:177] * [test-preload-913000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:20:19.110791    3642 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:20:19.110839    3642 notify.go:220] Checking for updates...
	I0918 13:20:19.117926    3642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:20:19.119502    3642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:20:19.122888    3642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:20:19.125915    3642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:20:19.128958    3642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:20:19.132177    3642 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:20:19.132234    3642 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:20:19.136894    3642 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:20:19.143927    3642 start.go:297] selected driver: qemu2
	I0918 13:20:19.143935    3642 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:20:19.143942    3642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:20:19.146355    3642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:20:19.149911    3642 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:20:19.153028    3642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:20:19.153049    3642 cni.go:84] Creating CNI manager for ""
	I0918 13:20:19.153070    3642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:20:19.153080    3642 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:20:19.153114    3642 start.go:340] cluster config:
	{Name:test-preload-913000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:20:19.156807    3642 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.164876    3642 out.go:177] * Starting "test-preload-913000" primary control-plane node in "test-preload-913000" cluster
	I0918 13:20:19.167887    3642 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0918 13:20:19.167973    3642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/test-preload-913000/config.json ...
	I0918 13:20:19.167992    3642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/test-preload-913000/config.json: {Name:mkf401a54950890ab82447acd42d0af6c55828a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:20:19.167987    3642 cache.go:107] acquiring lock: {Name:mk2002bf3399fa40232b4eb631e8a678521e5416 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.167989    3642 cache.go:107] acquiring lock: {Name:mk7e1bd637ac442408080d22b27b17505aeecec4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.168000    3642 cache.go:107] acquiring lock: {Name:mk1af723c864f545028c74165e240d220ba2c77a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.167985    3642 cache.go:107] acquiring lock: {Name:mk94a5eafb1e7f7f4b53543baf43f57f344fb5ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.168161    3642 cache.go:107] acquiring lock: {Name:mk2213892b1588c6671ecbc42a61210d2435118d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.168213    3642 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 13:20:19.168223    3642 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0918 13:20:19.168244    3642 cache.go:107] acquiring lock: {Name:mkec3ea7c0bc62b7318b27668f214011f27f783c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.168284    3642 cache.go:107] acquiring lock: {Name:mk90215fe683bc91c50f8186e9280be15d9c0fa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.168260    3642 cache.go:107] acquiring lock: {Name:mk445865636dcdaffd9bb366c13fc2361934ec7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:20:19.168296    3642 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0918 13:20:19.168385    3642 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0918 13:20:19.168396    3642 start.go:360] acquireMachinesLock for test-preload-913000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:20:19.168443    3642 start.go:364] duration metric: took 40.916µs to acquireMachinesLock for "test-preload-913000"
	I0918 13:20:19.168453    3642 start.go:93] Provisioning new machine with config: &{Name:test-preload-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:20:19.168505    3642 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:20:19.168506    3642 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:20:19.168598    3642 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0918 13:20:19.168615    3642 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:20:19.169134    3642 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:20:19.172964    3642 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:20:19.177290    3642 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0918 13:20:19.180113    3642 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0918 13:20:19.180130    3642 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:20:19.180159    3642 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:20:19.180311    3642 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:20:19.180455    3642 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0918 13:20:19.180497    3642 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0918 13:20:19.180552    3642 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0918 13:20:19.191732    3642 start.go:159] libmachine.API.Create for "test-preload-913000" (driver="qemu2")
	I0918 13:20:19.191758    3642 client.go:168] LocalClient.Create starting
	I0918 13:20:19.191829    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:20:19.191863    3642 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:19.191872    3642 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:19.191911    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:20:19.191934    3642 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:19.191944    3642 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:19.192290    3642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:20:19.352990    3642 main.go:141] libmachine: Creating SSH key...
	I0918 13:20:19.422070    3642 main.go:141] libmachine: Creating Disk image...
	I0918 13:20:19.422110    3642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:20:19.422309    3642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2
	I0918 13:20:19.431781    3642 main.go:141] libmachine: STDOUT: 
	I0918 13:20:19.431820    3642 main.go:141] libmachine: STDERR: 
	I0918 13:20:19.431896    3642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2 +20000M
	I0918 13:20:19.441380    3642 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:20:19.441424    3642 main.go:141] libmachine: STDERR: 
	I0918 13:20:19.441483    3642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2
	I0918 13:20:19.441488    3642 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:20:19.441498    3642 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:20:19.441527    3642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:3c:a2:39:f4:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2
	I0918 13:20:19.443843    3642 main.go:141] libmachine: STDOUT: 
	I0918 13:20:19.443860    3642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:20:19.443882    3642 client.go:171] duration metric: took 252.129041ms to LocalClient.Create
	I0918 13:20:19.585803    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0918 13:20:19.599768    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0918 13:20:19.658754    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0918 13:20:19.699073    3642 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0918 13:20:19.699110    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0918 13:20:19.734754    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0918 13:20:19.738836    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0918 13:20:19.778156    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0918 13:20:19.867258    3642 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0918 13:20:19.867322    3642 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 699.139459ms
	I0918 13:20:19.867361    3642 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0918 13:20:20.074282    3642 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 13:20:20.074370    3642 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 13:20:20.716127    3642 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0918 13:20:20.716174    3642 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.547988083s
	I0918 13:20:20.716202    3642 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0918 13:20:20.992197    3642 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 13:20:20.992262    3642 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.824351375s
	I0918 13:20:20.992295    3642 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 13:20:21.444076    3642 start.go:128] duration metric: took 2.27564025s to createHost
	I0918 13:20:21.444127    3642 start.go:83] releasing machines lock for "test-preload-913000", held for 2.275769042s
	W0918 13:20:21.444179    3642 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:20:21.461474    3642 out.go:177] * Deleting "test-preload-913000" in qemu2 ...
	W0918 13:20:21.492954    3642 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:20:21.493049    3642 start.go:729] Will try again in 5 seconds ...
	I0918 13:20:22.098617    3642 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0918 13:20:22.098667    3642 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.9307825s
	I0918 13:20:22.098712    3642 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0918 13:20:24.033735    3642 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0918 13:20:24.033783    3642 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.866006292s
	I0918 13:20:24.033834    3642 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0918 13:20:24.088850    3642 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0918 13:20:24.088887    3642 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.921105792s
	I0918 13:20:24.088913    3642 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0918 13:20:24.813140    3642 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0918 13:20:24.813191    3642 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.645254625s
	I0918 13:20:24.813218    3642 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0918 13:20:26.493075    3642 start.go:360] acquireMachinesLock for test-preload-913000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:20:26.493345    3642 start.go:364] duration metric: took 208.792µs to acquireMachinesLock for "test-preload-913000"
	I0918 13:20:26.493421    3642 start.go:93] Provisioning new machine with config: &{Name:test-preload-913000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:20:26.493566    3642 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:20:26.506423    3642 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:20:26.550503    3642 start.go:159] libmachine.API.Create for "test-preload-913000" (driver="qemu2")
	I0918 13:20:26.550606    3642 client.go:168] LocalClient.Create starting
	I0918 13:20:26.550755    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:20:26.550828    3642 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:26.550851    3642 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:26.550924    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:20:26.550985    3642 main.go:141] libmachine: Decoding PEM data...
	I0918 13:20:26.551004    3642 main.go:141] libmachine: Parsing certificate...
	I0918 13:20:26.551569    3642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:20:26.735911    3642 main.go:141] libmachine: Creating SSH key...
	I0918 13:20:26.837866    3642 main.go:141] libmachine: Creating Disk image...
	I0918 13:20:26.837872    3642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:20:26.838039    3642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2
	I0918 13:20:26.847892    3642 main.go:141] libmachine: STDOUT: 
	I0918 13:20:26.847907    3642 main.go:141] libmachine: STDERR: 
	I0918 13:20:26.847963    3642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2 +20000M
	I0918 13:20:26.856124    3642 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:20:26.856148    3642 main.go:141] libmachine: STDERR: 
	I0918 13:20:26.856159    3642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2
	I0918 13:20:26.856171    3642 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:20:26.856178    3642 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:20:26.856214    3642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:2e:c0:df:d7:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/test-preload-913000/disk.qcow2
	I0918 13:20:26.858006    3642 main.go:141] libmachine: STDOUT: 
	I0918 13:20:26.858020    3642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:20:26.858032    3642 client.go:171] duration metric: took 307.43275ms to LocalClient.Create
	I0918 13:20:28.927006    3642 start.go:128] duration metric: took 2.364728209s to createHost
	I0918 13:20:28.927067    3642 start.go:83] releasing machines lock for "test-preload-913000", held for 2.365049333s
	W0918 13:20:28.927364    3642 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-913000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:20:28.944082    3642 out.go:201] 
	W0918 13:20:28.946185    3642 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:20:28.946213    3642 out.go:270] * 
	W0918 13:20:28.948637    3642 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:20:28.964973    3642 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-913000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-18 13:20:28.983582 -0700 PDT m=+2600.068976460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-913000 -n test-preload-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-913000 -n test-preload-913000: exit status 7 (66.406208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-913000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-913000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-913000
--- FAIL: TestPreload (10.01s)
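Note: the libmachine lines above record the exact VM launch command, so the failure can be reproduced without minikube by invoking the wrapper directly. A sketch; the paths are copied from the log, and wrapping `true` instead of qemu-system-aarch64 is an assumption made to keep the probe side-effect-free:

	# socket_vmnet_client connects to the socket, then execs the wrapped command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# While the daemon is down this prints:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused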

TestScheduledStopUnix (9.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-962000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-962000 --memory=2048 --driver=qemu2 : exit status 80 (9.779238417s)

-- stdout --
	* [scheduled-stop-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-962000" primary control-plane node in "scheduled-stop-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-962000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-962000" primary control-plane node in "scheduled-stop-962000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-962000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-962000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-18 13:20:38.912795 -0700 PDT m=+2609.998438251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-962000 -n scheduled-stop-962000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-962000 -n scheduled-stop-962000: exit status 7 (68.805375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-962000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-962000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-962000
--- FAIL: TestScheduledStopUnix (9.93s)
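Note: if the probes above confirm the daemon is down, restarting it should clear this whole class of failures before the next run. A sketch assuming the install locations seen in this log; the Homebrew service name and the gateway address are assumptions, not taken from the log:

	# Homebrew-managed install:
	sudo brew services restart socket_vmnet
	# Manual install matching the /opt/socket_vmnet paths above:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &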

TestSkaffold (12.36s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe746134337 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe746134337 version: (1.067383791s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-127000 --memory=2600 --driver=qemu2 
E0918 13:20:50.369581    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-127000 --memory=2600 --driver=qemu2 : exit status 80 (9.95725925s)

-- stdout --
	* [skaffold-127000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-127000" primary control-plane node in "skaffold-127000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-127000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-127000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-127000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-127000" primary control-plane node in "skaffold-127000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-127000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-127000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-18 13:20:51.275049 -0700 PDT m=+2622.361008876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-127000 -n skaffold-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-127000 -n skaffold-127000: exit status 7 (63.436333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-127000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-127000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-127000
--- FAIL: TestSkaffold (12.36s)

TestRunningBinaryUpgrade (610.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.599220923 start -p running-upgrade-314000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.599220923 start -p running-upgrade-314000 --memory=2200 --vm-driver=qemu2 : (1m1.514887583s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-314000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-314000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.70748125s)

-- stdout --
	* [running-upgrade-314000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-314000" primary control-plane node in "running-upgrade-314000" cluster
	* Updating the running qemu2 "running-upgrade-314000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0918 13:22:32.174614    3941 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:22:32.174752    3941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:22:32.174755    3941 out.go:358] Setting ErrFile to fd 2...
	I0918 13:22:32.174758    3941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:22:32.174872    3941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:22:32.175904    3941 out.go:352] Setting JSON to false
	I0918 13:22:32.192721    3941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3111,"bootTime":1726687841,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:22:32.192794    3941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:22:32.197656    3941 out.go:177] * [running-upgrade-314000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:22:32.205510    3941 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:22:32.205533    3941 notify.go:220] Checking for updates...
	I0918 13:22:32.213608    3941 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:22:32.217636    3941 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:22:32.220594    3941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:22:32.223569    3941 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:22:32.226566    3941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:22:32.229908    3941 config.go:182] Loaded profile config "running-upgrade-314000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:22:32.233602    3941 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 13:22:32.236623    3941 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:22:32.241734    3941 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:22:32.248554    3941 start.go:297] selected driver: qemu2
	I0918 13:22:32.248563    3941 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:32.248628    3941 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:22:32.251066    3941 cni.go:84] Creating CNI manager for ""
	I0918 13:22:32.251116    3941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:22:32.251141    3941 start.go:340] cluster config:
	{Name:running-upgrade-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:32.251195    3941 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:22:32.258544    3941 out.go:177] * Starting "running-upgrade-314000" primary control-plane node in "running-upgrade-314000" cluster
	I0918 13:22:32.262560    3941 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0918 13:22:32.262594    3941 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0918 13:22:32.262603    3941 cache.go:56] Caching tarball of preloaded images
	I0918 13:22:32.262688    3941 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:22:32.262694    3941 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0918 13:22:32.262752    3941 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/config.json ...
	I0918 13:22:32.263157    3941 start.go:360] acquireMachinesLock for running-upgrade-314000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:22:32.263193    3941 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "running-upgrade-314000"
	I0918 13:22:32.263202    3941 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:22:32.263209    3941 fix.go:54] fixHost starting: 
	I0918 13:22:32.263895    3941 fix.go:112] recreateIfNeeded on running-upgrade-314000: state=Running err=<nil>
	W0918 13:22:32.263904    3941 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:22:32.267589    3941 out.go:177] * Updating the running qemu2 "running-upgrade-314000" VM ...
	I0918 13:22:32.275623    3941 machine.go:93] provisionDockerMachine start ...
	I0918 13:22:32.275694    3941 main.go:141] libmachine: Using SSH client type: native
	I0918 13:22:32.275852    3941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104821190] 0x1048239d0 <nil>  [] 0s} localhost 50220 <nil> <nil>}
	I0918 13:22:32.275857    3941 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 13:22:32.330027    3941 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-314000
	
	I0918 13:22:32.330040    3941 buildroot.go:166] provisioning hostname "running-upgrade-314000"
	I0918 13:22:32.330098    3941 main.go:141] libmachine: Using SSH client type: native
	I0918 13:22:32.330213    3941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104821190] 0x1048239d0 <nil>  [] 0s} localhost 50220 <nil> <nil>}
	I0918 13:22:32.330219    3941 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-314000 && echo "running-upgrade-314000" | sudo tee /etc/hostname
	I0918 13:22:32.389339    3941 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-314000
	
	I0918 13:22:32.389392    3941 main.go:141] libmachine: Using SSH client type: native
	I0918 13:22:32.389498    3941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104821190] 0x1048239d0 <nil>  [] 0s} localhost 50220 <nil> <nil>}
	I0918 13:22:32.389508    3941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-314000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-314000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-314000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 13:22:32.443914    3941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 13:22:32.443924    3941 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19667-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19667-1040/.minikube}
	I0918 13:22:32.443934    3941 buildroot.go:174] setting up certificates
	I0918 13:22:32.443939    3941 provision.go:84] configureAuth start
	I0918 13:22:32.443946    3941 provision.go:143] copyHostCerts
	I0918 13:22:32.444002    3941 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem, removing ...
	I0918 13:22:32.444013    3941 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem
	I0918 13:22:32.444149    3941 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem (1123 bytes)
	I0918 13:22:32.444333    3941 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem, removing ...
	I0918 13:22:32.444336    3941 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem
	I0918 13:22:32.444392    3941 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem (1679 bytes)
	I0918 13:22:32.444525    3941 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem, removing ...
	I0918 13:22:32.444528    3941 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem
	I0918 13:22:32.444582    3941 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem (1082 bytes)
	I0918 13:22:32.444671    3941 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-314000 san=[127.0.0.1 localhost minikube running-upgrade-314000]
	I0918 13:22:32.566820    3941 provision.go:177] copyRemoteCerts
	I0918 13:22:32.566871    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 13:22:32.566880    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	I0918 13:22:32.598465    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 13:22:32.605156    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 13:22:32.612088    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 13:22:32.618855    3941 provision.go:87] duration metric: took 174.910333ms to configureAuth
	I0918 13:22:32.618864    3941 buildroot.go:189] setting minikube options for container-runtime
	I0918 13:22:32.618966    3941 config.go:182] Loaded profile config "running-upgrade-314000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:22:32.619006    3941 main.go:141] libmachine: Using SSH client type: native
	I0918 13:22:32.619098    3941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104821190] 0x1048239d0 <nil>  [] 0s} localhost 50220 <nil> <nil>}
	I0918 13:22:32.619103    3941 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 13:22:32.675916    3941 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 13:22:32.675924    3941 buildroot.go:70] root file system type: tmpfs
	I0918 13:22:32.675971    3941 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 13:22:32.676023    3941 main.go:141] libmachine: Using SSH client type: native
	I0918 13:22:32.676129    3941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104821190] 0x1048239d0 <nil>  [] 0s} localhost 50220 <nil> <nil>}
	I0918 13:22:32.676162    3941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 13:22:32.732453    3941 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 13:22:32.732526    3941 main.go:141] libmachine: Using SSH client type: native
	I0918 13:22:32.732654    3941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104821190] 0x1048239d0 <nil>  [] 0s} localhost 50220 <nil> <nil>}
	I0918 13:22:32.732663    3941 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 13:22:32.790057    3941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 13:22:32.790069    3941 machine.go:96] duration metric: took 514.453042ms to provisionDockerMachine
	I0918 13:22:32.790075    3941 start.go:293] postStartSetup for "running-upgrade-314000" (driver="qemu2")
	I0918 13:22:32.790080    3941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 13:22:32.790144    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 13:22:32.790157    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	I0918 13:22:32.819104    3941 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 13:22:32.820464    3941 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 13:22:32.820472    3941 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/addons for local assets ...
	I0918 13:22:32.820572    3941 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/files for local assets ...
	I0918 13:22:32.820698    3941 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem -> 15162.pem in /etc/ssl/certs
	I0918 13:22:32.820821    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 13:22:32.823390    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem --> /etc/ssl/certs/15162.pem (1708 bytes)
	I0918 13:22:32.830037    3941 start.go:296] duration metric: took 39.958625ms for postStartSetup
	I0918 13:22:32.830051    3941 fix.go:56] duration metric: took 566.861833ms for fixHost
	I0918 13:22:32.830101    3941 main.go:141] libmachine: Using SSH client type: native
	I0918 13:22:32.830207    3941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104821190] 0x1048239d0 <nil>  [] 0s} localhost 50220 <nil> <nil>}
	I0918 13:22:32.830211    3941 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 13:22:32.884749    3941 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726690953.030588528
	
	I0918 13:22:32.884761    3941 fix.go:216] guest clock: 1726690953.030588528
	I0918 13:22:32.884765    3941 fix.go:229] Guest: 2024-09-18 13:22:33.030588528 -0700 PDT Remote: 2024-09-18 13:22:32.830053 -0700 PDT m=+0.676039918 (delta=200.535528ms)
	I0918 13:22:32.884777    3941 fix.go:200] guest clock delta is within tolerance: 200.535528ms
	I0918 13:22:32.884781    3941 start.go:83] releasing machines lock for "running-upgrade-314000", held for 621.60025ms
	I0918 13:22:32.884855    3941 ssh_runner.go:195] Run: cat /version.json
	I0918 13:22:32.884867    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	I0918 13:22:32.884889    3941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 13:22:32.884911    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	W0918 13:22:32.885529    3941 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50220: connect: connection refused
	I0918 13:22:32.885554    3941 retry.go:31] will retry after 176.240808ms: dial tcp [::1]:50220: connect: connection refused
	W0918 13:22:32.912440    3941 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0918 13:22:32.912510    3941 ssh_runner.go:195] Run: systemctl --version
	I0918 13:22:32.914432    3941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 13:22:32.916225    3941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 13:22:32.916254    3941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0918 13:22:32.918920    3941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0918 13:22:32.923657    3941 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 13:22:32.923664    3941 start.go:495] detecting cgroup driver to use...
	I0918 13:22:32.923725    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 13:22:32.928654    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0918 13:22:32.931668    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 13:22:32.934778    3941 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 13:22:32.934808    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 13:22:32.938187    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 13:22:32.941883    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 13:22:32.945210    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 13:22:32.948361    3941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 13:22:32.951976    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 13:22:32.954974    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 13:22:32.958551    3941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 13:22:32.962436    3941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 13:22:32.965921    3941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 13:22:32.968998    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:33.052515    3941 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 13:22:33.064210    3941 start.go:495] detecting cgroup driver to use...
	I0918 13:22:33.064291    3941 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 13:22:33.071257    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 13:22:33.077602    3941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 13:22:33.085970    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 13:22:33.091952    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 13:22:33.099631    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 13:22:33.144337    3941 ssh_runner.go:195] Run: which cri-dockerd
	I0918 13:22:33.145717    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 13:22:33.149640    3941 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 13:22:33.155253    3941 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 13:22:33.244084    3941 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 13:22:33.342967    3941 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 13:22:33.343025    3941 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 13:22:33.348585    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:33.454865    3941 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 13:22:46.198782    3941 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.744233042s)
	I0918 13:22:46.198859    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 13:22:46.203740    3941 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0918 13:22:46.210391    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 13:22:46.215352    3941 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 13:22:46.306350    3941 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 13:22:46.387617    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:46.467040    3941 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 13:22:46.473824    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 13:22:46.478979    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:46.542178    3941 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 13:22:46.582193    3941 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 13:22:46.582291    3941 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 13:22:46.584421    3941 start.go:563] Will wait 60s for crictl version
	I0918 13:22:46.584496    3941 ssh_runner.go:195] Run: which crictl
	I0918 13:22:46.586926    3941 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 13:22:46.598598    3941 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0918 13:22:46.598673    3941 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 13:22:46.611529    3941 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 13:22:46.630950    3941 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0918 13:22:46.631026    3941 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0918 13:22:46.632569    3941 kubeadm.go:883] updating cluster {Name:running-upgrade-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0918 13:22:46.632613    3941 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0918 13:22:46.632657    3941 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 13:22:46.643174    3941 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 13:22:46.643182    3941 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0918 13:22:46.643231    3941 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 13:22:46.646371    3941 ssh_runner.go:195] Run: which lz4
	I0918 13:22:46.647645    3941 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 13:22:46.648918    3941 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 13:22:46.648929    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0918 13:22:47.583095    3941 docker.go:649] duration metric: took 935.51575ms to copy over tarball
	I0918 13:22:47.583176    3941 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 13:22:48.840738    3941 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.257581833s)
	I0918 13:22:48.840752    3941 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 13:22:48.857218    3941 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 13:22:48.860383    3941 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0918 13:22:48.865682    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:48.954432    3941 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 13:22:50.174992    3941 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.220577125s)
	I0918 13:22:50.175107    3941 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 13:22:50.186297    3941 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 13:22:50.186306    3941 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0918 13:22:50.186312    3941 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 13:22:50.190149    3941 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.193077    3941 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:50.195970    3941 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.196045    3941 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.198612    3941 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.198610    3941 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:50.200027    3941 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.200135    3941 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.201114    3941 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.201434    3941 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.202346    3941 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0918 13:22:50.202488    3941 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.203531    3941 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.203878    3941 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.204335    3941 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0918 13:22:50.205610    3941 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.524276    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.535848    3941 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0918 13:22:50.535874    3941 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.535946    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.546420    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0918 13:22:50.594383    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.605451    3941 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0918 13:22:50.605470    3941 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.605529    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.617076    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0918 13:22:50.621993    3941 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0918 13:22:50.622121    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.633073    3941 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0918 13:22:50.633102    3941 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.633182    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.636242    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.647281    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0918 13:22:50.647421    3941 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:22:50.652675    3941 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0918 13:22:50.652689    3941 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0918 13:22:50.652700    3941 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.652712    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0918 13:22:50.652762    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.663086    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.674544    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0918 13:22:50.676207    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0918 13:22:50.689603    3941 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0918 13:22:50.689627    3941 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.689690    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.701384    3941 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0918 13:22:50.701410    3941 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0918 13:22:50.701489    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0918 13:22:50.722614    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.724844    3941 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:22:50.724855    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0918 13:22:50.734905    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0918 13:22:50.734936    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0918 13:22:50.735065    3941 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0918 13:22:50.745693    3941 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0918 13:22:50.745718    3941 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.745784    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.785209    3941 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0918 13:22:50.785237    3941 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0918 13:22:50.785255    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0918 13:22:50.785259    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0918 13:22:50.793451    3941 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0918 13:22:50.793460    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0918 13:22:50.818246    3941 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0918 13:22:51.068628    3941 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 13:22:51.069310    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:51.109508    3941 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0918 13:22:51.109569    3941 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:51.109738    3941 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:52.594311    3941 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.484573542s)
	I0918 13:22:52.594353    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 13:22:52.594935    3941 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:22:52.600231    3941 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0918 13:22:52.600287    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0918 13:22:52.650969    3941 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:22:52.650994    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0918 13:22:52.900688    3941 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 13:22:52.900725    3941 cache_images.go:92] duration metric: took 2.714475417s to LoadCachedImages
	W0918 13:22:52.900763    3941 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0918 13:22:52.900768    3941 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0918 13:22:52.900824    3941 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-314000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 13:22:52.900909    3941 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 13:22:52.913919    3941 cni.go:84] Creating CNI manager for ""
	I0918 13:22:52.913944    3941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:22:52.913954    3941 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 13:22:52.913966    3941 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-314000 NodeName:running-upgrade-314000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 13:22:52.914037    3941 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-314000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 13:22:52.914108    3941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0918 13:22:52.917711    3941 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 13:22:52.917755    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 13:22:52.921196    3941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0918 13:22:52.926101    3941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 13:22:52.931579    3941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
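The 2096-byte kubeadm.yaml written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch of walking such a stream document by document, assuming the external gopkg.in/yaml.v3 package (an illustration only, not the parser minikube or kubeadm actually uses):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed external dependency
)

func main() {
	f, err := os.Open("kubeadm.yaml") // the multi-document stream from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // end of the stream
		}
		if err != nil {
			panic(err)
		}
		// Each document announces its schema via apiVersion/kind.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}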
	I0918 13:22:52.936458    3941 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0918 13:22:52.937856    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:53.027793    3941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:22:53.032585    3941 certs.go:68] Setting up /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000 for IP: 10.0.2.15
	I0918 13:22:53.032594    3941 certs.go:194] generating shared ca certs ...
	I0918 13:22:53.032606    3941 certs.go:226] acquiring lock for ca certs: {Name:mk6bf733e3b7a8269fa0cc74c7cf113ceab149df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.032773    3941 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key
	I0918 13:22:53.032821    3941 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key
	I0918 13:22:53.032828    3941 certs.go:256] generating profile certs ...
	I0918 13:22:53.032922    3941 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.key
	I0918 13:22:53.032941    3941 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede
	I0918 13:22:53.032950    3941 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0918 13:22:53.107209    3941 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede ...
	I0918 13:22:53.107216    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede: {Name:mk9a4ddd13893e646499520f9e37a03e12f5d465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.107636    3941 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede ...
	I0918 13:22:53.107641    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede: {Name:mk424950dbf89558b44cb97b1c982ae4f8f49cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.107798    3941 certs.go:381] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt
	I0918 13:22:53.107941    3941 certs.go:385] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key
	I0918 13:22:53.108088    3941 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/proxy-client.key
	I0918 13:22:53.108224    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem (1338 bytes)
	W0918 13:22:53.108252    3941 certs.go:480] ignoring /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516_empty.pem, impossibly tiny 0 bytes
	I0918 13:22:53.108257    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 13:22:53.108283    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem (1082 bytes)
	I0918 13:22:53.108310    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem (1123 bytes)
	I0918 13:22:53.108335    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem (1679 bytes)
	I0918 13:22:53.108388    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem (1708 bytes)
	I0918 13:22:53.108706    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 13:22:53.116408    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 13:22:53.123692    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 13:22:53.130788    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 13:22:53.137792    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 13:22:53.144929    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 13:22:53.151706    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 13:22:53.158237    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 13:22:53.165532    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem --> /usr/share/ca-certificates/1516.pem (1338 bytes)
	I0918 13:22:53.173147    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem --> /usr/share/ca-certificates/15162.pem (1708 bytes)
	I0918 13:22:53.180296    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 13:22:53.186992    3941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
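The apiserver cert generated above carries IP SANs for 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes service VIP), 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. A hedged sketch of minting a cert with those SANs using crypto/x509 (self-signed here for brevity; the real flow signs with the minikubeCA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	// Self-signed for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}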
	I0918 13:22:53.192280    3941 ssh_runner.go:195] Run: openssl version
	I0918 13:22:53.194072    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 13:22:53.197549    3941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:22:53.199335    3941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:22:53.199362    3941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:22:53.201364    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 13:22:53.204104    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1516.pem && ln -fs /usr/share/ca-certificates/1516.pem /etc/ssl/certs/1516.pem"
	I0918 13:22:53.207122    3941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1516.pem
	I0918 13:22:53.208633    3941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:53 /usr/share/ca-certificates/1516.pem
	I0918 13:22:53.208657    3941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1516.pem
	I0918 13:22:53.210629    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1516.pem /etc/ssl/certs/51391683.0"
	I0918 13:22:53.213822    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15162.pem && ln -fs /usr/share/ca-certificates/15162.pem /etc/ssl/certs/15162.pem"
	I0918 13:22:53.217405    3941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15162.pem
	I0918 13:22:53.219136    3941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:53 /usr/share/ca-certificates/15162.pem
	I0918 13:22:53.219164    3941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15162.pem
	I0918 13:22:53.221013    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15162.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 13:22:53.223856    3941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 13:22:53.225422    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 13:22:53.227536    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 13:22:53.229377    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 13:22:53.231242    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 13:22:53.233036    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 13:22:53.234780    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
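The openssl calls above do two jobs: x509 -hash -noout prints the subject-name hash that names the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0), and x509 -checkend 86400 exits non-zero if a cert expires within the next 24 hours. The expiry check has a straightforward native-Go equivalent, sketched here with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}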
	I0918 13:22:53.236574    3941 kubeadm.go:392] StartCluster: {Name:running-upgrade-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:53.236645    3941 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:22:53.246851    3941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 13:22:53.250363    3941 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 13:22:53.250371    3941 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 13:22:53.250397    3941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 13:22:53.254161    3941 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.254404    3941 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-314000" does not appear in /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:22:53.254455    3941 kubeconfig.go:62] /Users/jenkins/minikube-integration/19667-1040/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-314000" cluster setting kubeconfig missing "running-upgrade-314000" context setting]
	I0918 13:22:53.254591    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.256388    3941 kapi.go:59] client config for running-upgrade-314000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105df9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:22:53.256723    3941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 13:22:53.259571    3941 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-314000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
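The diff explains why minikube reconfigures instead of reusing the running control plane: the pre-upgrade kubeadm.yaml still names the CRI socket without the unix:// scheme and uses cgroupDriver: systemd, while the new config wants unix:///var/run/cri-dockerd.sock, cgroupfs, and the extra kubelet options. Drift detection here is just diff's exit status (0 means identical, 1 means the files differ). A sketch of that decision, run locally for illustration rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and interprets the exit status:
// 0 means no drift, 1 means the files differ, anything else is an error.
func configDrifted(oldPath, newPath string) (bool, error) {
	cmd := exec.Command("diff", "-u", oldPath, newPath)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return false, nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("drift detected:\n%s", out)
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("reconfigure needed:", drifted)
}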
	I0918 13:22:53.259576    3941 kubeadm.go:1160] stopping kube-system containers ...
	I0918 13:22:53.259626    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:22:53.270603    3941 docker.go:483] Stopping containers: [dffd1cd50f36 ceef4d344b48 e22e86a4ce24 6b1ac7de9044 48327b73c9bd 7e9f605d25c5 6632cd6218f3 b055a7066d86 ab5a367ffd08 cb6295d7aef9 b9a4e1994b07 9555ba0e451f 627ec8a706ce 1e9a779de08e]
	I0918 13:22:53.270673    3941 ssh_runner.go:195] Run: docker stop dffd1cd50f36 ceef4d344b48 e22e86a4ce24 6b1ac7de9044 48327b73c9bd 7e9f605d25c5 6632cd6218f3 b055a7066d86 ab5a367ffd08 cb6295d7aef9 b9a4e1994b07 9555ba0e451f 627ec8a706ce 1e9a779de08e
	I0918 13:22:53.281642    3941 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 13:22:53.383573    3941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:22:53.388447    3941 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 18 20:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 18 20:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 18 20:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 18 20:22 /etc/kubernetes/scheduler.conf
	
	I0918 13:22:53.388489    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf
	I0918 13:22:53.392261    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.392290    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:22:53.395819    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf
	I0918 13:22:53.398873    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.398896    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:22:53.402137    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf
	I0918 13:22:53.405481    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.405504    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:22:53.408759    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf
	I0918 13:22:53.411557    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.411587    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 13:22:53.414248    3941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:22:53.417454    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:53.441328    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:53.959122    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:54.170859    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:54.194174    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
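Rather than a full kubeadm init, the restart path replays individual init phases against the same config: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A sketch of driving the same sequence, assuming a kubeadm binary on PATH (the log runs it under sudo with the versioned binaries directory prepended):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	// The same phase sequence the log shows, replayed in order.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubeadm %v: %v\n%s", args, err, out))
		}
	}
}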
	I0918 13:22:54.214554    3941 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:22:54.214644    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:22:54.717024    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:22:55.216716    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:22:55.221064    3941 api_server.go:72] duration metric: took 1.006538916s to wait for apiserver process to appear ...
	I0918 13:22:55.221074    3941 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:22:55.221084    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:00.223017    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:00.223041    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:05.223122    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:05.223159    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:10.223581    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:10.223677    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:15.224367    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:15.224390    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:20.224939    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:20.224957    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:25.225640    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:25.225682    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:30.226715    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:30.226756    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:35.228129    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:35.228154    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:40.229787    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:40.229823    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:45.231990    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:45.232031    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:50.234222    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:50.234277    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:55.236044    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
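Each "Checking apiserver healthz ... stopped" pair above is one iteration of a poll loop: an HTTPS GET of /healthz that gives up after roughly five seconds (hence the 5-second spacing of the timestamps), retried until the API server answers or an outer deadline expires. A minimal sketch of that pattern (TLS verification disabled purely for illustration; the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps in the log
		Transport: &http.Transport{
			// Illustration only; the real check verifies the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute))
	fmt.Println(err)
}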
	I0918 13:23:55.236169    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:23:55.248125    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:23:55.248227    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:23:55.258655    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:23:55.258732    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:23:55.269332    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:23:55.269419    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:23:55.279596    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:23:55.279684    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:23:55.290090    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:23:55.290172    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:23:55.303206    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:23:55.303282    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:23:55.313239    3941 logs.go:276] 0 containers: []
	W0918 13:23:55.313249    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:23:55.313310    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:23:55.325544    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:23:55.325572    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:23:55.325582    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:23:55.402809    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:23:55.402820    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:23:55.414403    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:23:55.414418    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:23:55.427351    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:23:55.427359    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:23:55.441220    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:23:55.441230    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:23:55.452411    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:23:55.452423    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:23:55.468410    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:23:55.468420    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:23:55.480057    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:23:55.480072    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:23:55.506915    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:23:55.506923    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:23:55.544390    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:23:55.544399    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:23:55.557567    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:23:55.557578    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:23:55.575597    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:23:55.575609    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:23:55.586846    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:23:55.586862    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:23:55.598107    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:23:55.598118    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:23:55.602481    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:23:55.602488    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:23:55.616606    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:23:55.616617    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:23:55.633829    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:23:55.633841    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
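When healthz keeps failing, every retry round gathers the same diagnostics: docker ps -a filtered by the kubelet-style k8s_<component> container names to find current and exited instances of each control-plane component, then docker logs --tail 400 on each ID, plus journalctl for kubelet and Docker, dmesg, and kubectl describe nodes. A sketch of the per-component gathering step:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the kubelet-style k8s_<component>_... pattern.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, comp := range components {
		ids, err := containerIDs(comp)
		if err != nil {
			panic(err)
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", comp, id, logs)
		}
	}
}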
	I0918 13:23:58.150808    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:03.152952    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:03.153203    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:03.176664    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:03.176825    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:03.194886    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:03.194973    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:03.207887    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:03.207974    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:03.223187    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:03.223266    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:03.233558    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:03.233665    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:03.243916    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:03.243991    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:03.253924    3941 logs.go:276] 0 containers: []
	W0918 13:24:03.253939    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:03.254002    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:03.264826    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:03.264843    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:03.264856    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:03.279235    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:03.279246    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:03.297895    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:03.297906    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:03.309651    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:03.309660    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:03.323860    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:03.323875    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:03.338180    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:03.338190    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:03.362768    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:03.362778    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:03.398720    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:03.398727    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:03.435501    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:03.435511    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:03.453416    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:03.453428    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:03.471119    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:03.471129    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:03.482215    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:03.482225    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:03.500194    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:03.500205    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:03.514820    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:03.514829    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:03.531391    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:03.531403    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:03.543991    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:03.544002    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:03.548739    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:03.548745    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:06.062993    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:11.065635    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:11.065953    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:11.094235    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:11.094390    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:11.111704    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:11.111806    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:11.125296    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:11.125385    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:11.137188    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:11.137274    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:11.147604    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:11.147692    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:11.158476    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:11.158555    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:11.168766    3941 logs.go:276] 0 containers: []
	W0918 13:24:11.168780    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:11.168846    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:11.179153    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:11.179172    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:11.179177    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:11.204499    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:11.204508    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:11.215863    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:11.215874    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:11.231204    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:11.231215    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:11.245753    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:11.245764    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:11.259629    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:11.259642    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:11.272045    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:11.272056    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:11.286130    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:11.286141    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:11.300848    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:11.300858    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:11.316909    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:11.316920    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:11.334005    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:11.334015    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:11.345488    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:11.345498    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:11.349828    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:11.349835    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:11.361262    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:11.361272    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:11.397047    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:11.397057    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:11.409116    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:11.409127    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:11.420157    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:11.420170    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:13.958178    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:18.960444    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:18.960612    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:18.976798    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:18.976903    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:18.989577    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:18.989664    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:19.000933    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:19.001022    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:19.011617    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:19.011698    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:19.021998    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:19.022081    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:19.032889    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:19.032979    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:19.044721    3941 logs.go:276] 0 containers: []
	W0918 13:24:19.044735    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:19.044819    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:19.057104    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:19.057124    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:19.057129    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:19.069256    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:19.069270    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:19.081103    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:19.081114    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:19.106220    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:19.106227    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:19.125379    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:19.125392    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:19.161370    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:19.161384    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:19.174213    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:19.174227    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:19.189064    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:19.189076    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:19.201876    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:19.201887    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:19.222173    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:19.222187    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:19.233574    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:19.233585    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:19.245303    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:19.245317    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:19.249878    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:19.249885    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:19.263441    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:19.263452    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:19.277693    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:19.277703    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:19.289587    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:19.289597    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:19.301330    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:19.301343    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:21.841412    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:26.843491    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:26.843689    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:26.862528    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:26.862650    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:26.876746    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:26.876831    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:26.888440    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:26.888526    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:26.899444    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:26.899534    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:26.912662    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:26.912739    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:26.925257    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:26.925345    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:26.935410    3941 logs.go:276] 0 containers: []
	W0918 13:24:26.935422    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:26.935500    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:26.951393    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:26.951414    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:26.951420    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:26.965865    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:26.965875    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:26.979004    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:26.979020    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:26.996224    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:26.996237    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:27.008493    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:27.008503    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:27.022713    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:27.022722    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:27.033938    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:27.033949    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:27.045627    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:27.045639    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:27.050103    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:27.050112    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:27.061376    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:27.061389    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:27.075170    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:27.075181    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:27.087849    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:27.087866    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:27.114686    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:27.114699    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:27.126032    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:27.126048    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:27.137710    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:27.137722    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:27.164089    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:27.164096    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:27.202201    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:27.202213    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:29.746138    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:34.748187    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:34.748374    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:34.762053    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:34.762155    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:34.773379    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:34.773471    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:34.784068    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:34.784159    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:34.794704    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:34.794790    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:34.807906    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:34.807990    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:34.818745    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:34.818831    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:34.828546    3941 logs.go:276] 0 containers: []
	W0918 13:24:34.828559    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:34.828627    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:34.838793    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:34.838812    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:34.838818    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:34.843047    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:34.843055    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:34.856362    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:34.856371    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:34.870604    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:34.870614    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:34.885786    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:34.885796    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:34.896999    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:34.897014    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:34.923079    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:34.923090    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:34.936684    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:34.936695    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:34.947822    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:34.947833    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:34.968961    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:34.968976    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:34.986792    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:34.986808    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:34.998785    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:34.998799    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:35.037608    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:35.037624    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:35.050625    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:35.050639    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:35.062387    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:35.062400    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:35.073755    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:35.073769    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:35.085491    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:35.085504    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
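Before every round of log collection, the runner enumerates running and exited containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; two IDs for a component (as for kube-apiserver, etcd, and the scheduler here) means the kubelet has restarted that container, which is why both IDs get their logs tailed. A rough sketch of that discovery step, assuming a hypothetical listContainers helper run locally rather than over SSH as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers, running or exited,
// whose names match the kubelet's k8s_<component> naming convention.
// In the real flow this command is executed over SSH inside the guest.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}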
	I0918 13:24:37.625474    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:42.627639    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:42.627964    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:42.660591    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:42.660742    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:42.677935    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:42.678027    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:42.690907    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:42.690999    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:42.702672    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:42.702756    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:42.713414    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:42.713491    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:42.724052    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:42.724159    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:42.738485    3941 logs.go:276] 0 containers: []
	W0918 13:24:42.738495    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:42.738557    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:42.749731    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:42.749750    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:42.749755    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:42.762248    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:42.762259    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:42.773603    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:42.773613    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:42.797919    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:42.797929    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:42.812595    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:42.812611    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:42.826860    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:42.826874    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:42.838281    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:42.838292    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:42.853292    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:42.853310    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:42.864742    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:42.864756    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:42.876703    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:42.876715    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:42.911112    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:42.911124    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:42.924180    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:42.924190    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:42.942488    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:42.942502    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:42.956792    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:42.956805    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:42.961614    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:42.961622    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:42.979249    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:42.979262    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:42.991111    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:42.991121    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
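The "container status" step shells out sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: when crictl is on PATH it is used; when it is not, the echo makes the first command fail with "crictl: command not found" and the || falls back to plain docker ps -a. A hedged Go equivalent of that chain (containerStatus is an invented name, not a minikube function):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers the CRI-level view via crictl and falls back
// to docker when crictl is missing or fails, mirroring the logged
// bash one-liner.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	// fallback branch, equivalent to `|| sudo docker ps -a`
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}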
	I0918 13:24:45.529275    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:50.531469    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:50.532039    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:50.572474    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:50.572641    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:50.595323    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:50.595452    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:50.610799    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:50.610887    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:50.623459    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:50.623552    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:50.634352    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:50.634433    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:50.646895    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:50.646979    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:50.657383    3941 logs.go:276] 0 containers: []
	W0918 13:24:50.657393    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:50.657463    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:50.672159    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:50.672177    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:50.672182    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:50.687208    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:50.687221    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:50.699228    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:50.699238    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:50.710661    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:50.710673    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:50.722805    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:50.722818    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:50.759169    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:50.759183    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:50.794138    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:50.794152    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:50.808310    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:50.808322    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:50.820936    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:50.820946    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:50.847242    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:50.847256    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:50.851730    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:50.851739    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:50.871408    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:50.871419    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:50.882589    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:50.882602    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:50.894699    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:50.894709    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:50.912036    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:50.912046    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:50.924149    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:50.924159    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:50.935247    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:50.935258    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
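The cadence of the loop is readable from the timestamps: each probe blocks for its full five-second timeout, the diagnostics round takes a few hundred milliseconds, and roughly two and a half seconds pass before the next probe begins. A sketch of such an outer wait loop follows; waitForAPIServer and its intervals are illustrative assumptions read off the log, not minikube's actual api_server.go constants.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForAPIServer retries probe until it succeeds or the overall
// deadline passes; gather stands in for the per-failure diagnostics
// round seen in the log above.
func waitForAPIServer(probe func() error, gather func(), deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if err := probe(); err == nil {
			return nil
		}
		gather()                            // ~0.3-0.5 s of log collection
		time.Sleep(2500 * time.Millisecond) // pause before the next probe
	}
	return errors.New("apiserver never reported healthy")
}

func main() {
	err := waitForAPIServer(
		func() error { return errors.New("context deadline exceeded") },
		func() {},
		15*time.Second,
	)
	fmt.Println(err)
}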
	I0918 13:24:53.458554    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:58.460612    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:58.460777    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:58.472293    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:58.472387    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:58.485738    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:58.485815    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:58.500229    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:58.500305    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:58.510475    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:58.510567    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:58.520900    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:58.520992    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:58.531037    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:58.531122    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:58.541122    3941 logs.go:276] 0 containers: []
	W0918 13:24:58.541139    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:58.541215    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:58.551638    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:58.551655    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:58.551660    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:58.588016    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:58.588028    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:58.602201    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:58.602209    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:58.617806    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:58.617820    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:58.629613    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:58.629623    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:58.653456    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:58.653464    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:58.690772    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:58.690780    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:58.702832    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:58.702845    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:58.721983    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:58.721999    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:58.733425    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:58.733439    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:58.746456    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:58.746467    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:58.763135    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:58.763145    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:58.775216    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:58.775231    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:58.786538    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:58.786551    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:58.798683    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:58.798698    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:58.802870    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:58.802876    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:58.816539    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:58.816550    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:01.329873    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:06.331924    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:06.332113    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:06.350583    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:06.350693    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:06.364698    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:06.364818    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:06.377038    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:06.377129    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:06.388059    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:06.388141    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:06.398503    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:06.398585    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:06.408880    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:06.408950    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:06.419662    3941 logs.go:276] 0 containers: []
	W0918 13:25:06.419674    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:06.419731    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:06.430447    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:06.430465    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:06.430470    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:06.435020    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:06.435027    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:06.449449    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:06.449459    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:06.461814    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:06.461829    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:06.497688    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:06.497699    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:06.513739    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:06.513752    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:06.529863    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:06.529873    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:06.548426    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:06.548436    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:06.566083    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:06.566092    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:06.577628    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:06.577641    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:06.602474    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:06.602488    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:06.640303    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:06.640313    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:06.653430    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:06.653441    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:06.665719    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:06.665730    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:06.677829    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:06.677842    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:06.689606    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:06.689622    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:06.701589    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:06.701599    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:09.215317    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:14.217359    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:14.217470    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:14.229469    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:14.229560    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:14.241016    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:14.241098    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:14.251785    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:14.251864    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:14.262992    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:14.263071    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:14.279117    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:14.279200    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:14.290137    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:14.290212    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:14.307484    3941 logs.go:276] 0 containers: []
	W0918 13:25:14.307499    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:14.307579    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:14.319367    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:14.319392    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:14.319397    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:14.332062    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:14.332077    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:14.347038    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:14.347049    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:14.358873    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:14.358887    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:14.371235    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:14.371246    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:14.387120    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:14.387133    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:14.412012    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:14.412030    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:14.449705    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:14.449719    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:14.454377    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:14.454389    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:14.492790    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:14.492804    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:14.507156    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:14.507169    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:14.518490    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:14.518502    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:14.532501    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:14.532513    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:14.548405    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:14.548417    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:14.560214    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:14.560224    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:14.577327    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:14.577343    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:14.590047    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:14.590059    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:17.102219    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:22.104348    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:22.104733    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:22.134110    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:22.134268    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:22.152142    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:22.152255    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:22.166090    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:22.166180    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:22.177712    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:22.177806    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:22.188206    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:22.188295    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:22.206303    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:22.206386    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:22.216566    3941 logs.go:276] 0 containers: []
	W0918 13:25:22.216582    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:22.216653    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:22.227351    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:22.227369    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:22.227374    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:22.240127    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:22.240154    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:22.256553    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:22.256562    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:22.268602    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:22.268613    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:22.282258    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:22.282271    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:22.319873    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:22.319884    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:22.333871    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:22.333882    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:22.359208    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:22.359219    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:22.365883    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:22.365898    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:22.378308    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:22.378323    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:22.398570    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:22.398586    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:22.413525    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:22.413540    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:22.431190    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:22.431200    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:22.468919    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:22.468929    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:22.487622    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:22.487632    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:22.499065    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:22.499076    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:22.512773    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:22.512790    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:25.032326    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:30.034439    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:30.034826    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:30.063433    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:30.063588    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:30.086485    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:30.086581    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:30.099656    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:30.099747    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:30.110954    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:30.111036    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:30.126135    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:30.126224    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:30.136904    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:30.136975    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:30.150195    3941 logs.go:276] 0 containers: []
	W0918 13:25:30.150208    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:30.150281    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:30.161279    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:30.161300    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:30.161305    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:30.178091    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:30.178101    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:30.192813    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:30.192822    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:30.197353    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:30.197361    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:30.212237    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:30.212250    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:30.224374    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:30.224385    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:30.240842    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:30.240852    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:30.275448    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:30.275465    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:30.291359    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:30.291371    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:30.328584    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:30.328593    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:30.340596    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:30.340607    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:30.352614    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:30.352627    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:30.364946    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:30.364957    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:30.383208    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:30.383222    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:30.394472    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:30.394485    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:30.405775    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:30.405790    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:30.430775    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:30.430783    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:32.945558    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:37.947669    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:37.947998    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:37.974137    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:37.974297    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:37.991421    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:37.991525    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:38.004992    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:38.005082    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:38.016641    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:38.016727    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:38.030629    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:38.030718    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:38.041316    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:38.041396    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:38.051727    3941 logs.go:276] 0 containers: []
	W0918 13:25:38.051743    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:38.051819    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:38.062305    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:38.062324    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:38.062329    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:38.085572    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:38.085581    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:38.122132    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:38.122142    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:38.162338    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:38.162354    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:38.175621    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:38.175638    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:38.190525    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:38.190536    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:38.204540    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:38.204552    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:38.216171    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:38.216182    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:38.221043    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:38.221051    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:38.235411    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:38.235422    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:38.247469    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:38.247480    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:38.261771    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:38.261783    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:38.279752    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:38.279762    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:38.291468    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:38.291477    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:38.309320    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:38.309329    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:38.320696    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:38.320707    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:38.332744    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:38.332760    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:40.846931    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:45.848971    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:45.849105    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:45.860644    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:45.860740    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:45.871921    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:45.872003    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:45.882438    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:45.882523    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:45.892864    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:45.892945    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:45.903416    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:45.903493    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:45.914373    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:45.914459    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:45.924997    3941 logs.go:276] 0 containers: []
	W0918 13:25:45.925016    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:45.925099    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:45.935690    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:45.935711    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:45.935717    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:45.940784    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:45.940791    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:45.976134    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:45.976150    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:45.990855    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:45.990866    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:46.005458    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:46.005469    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:46.020052    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:46.020062    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:46.031744    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:46.031756    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:46.043886    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:46.043897    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:46.056126    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:46.056139    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:46.067601    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:46.067610    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:46.106627    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:46.106637    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:46.123702    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:46.123713    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:46.135739    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:46.135751    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:46.158928    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:46.158937    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:46.174079    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:46.174090    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:46.189020    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:46.189031    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:46.200948    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:46.200959    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:48.714705    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:53.716744    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:53.716869    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:53.728606    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:53.728698    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:53.739501    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:53.739592    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:53.751894    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:53.751967    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:53.770522    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:53.770596    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:53.781343    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:53.781413    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:53.793893    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:53.793984    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:53.804864    3941 logs.go:276] 0 containers: []
	W0918 13:25:53.804876    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:53.804942    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:53.819159    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:53.819179    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:53.819185    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:53.853731    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:53.853746    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:53.869454    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:53.869465    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:53.883818    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:53.883833    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:53.896417    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:53.896428    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:53.909853    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:53.909863    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:53.921267    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:53.921282    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:53.933782    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:53.933796    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:53.946043    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:53.946052    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:53.969301    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:53.969311    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:53.973442    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:53.973449    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:53.994725    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:53.994739    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:54.006012    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:54.006023    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:54.020874    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:54.020888    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:54.056685    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:54.056693    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:54.067689    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:54.067699    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:54.084724    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:54.084737    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:56.605537    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:01.606437    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
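(The cycle above repeats for the next several minutes: minikube probes the apiserver's /healthz endpoint, the GET times out after roughly five seconds, and the tooling then enumerates the control-plane containers and tails their logs before trying again. A minimal Go sketch of such a probe, assuming a 5s client timeout and a self-signed apiserver certificate; probeHealthz is a hypothetical helper, not minikube's actual code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues a GET against the apiserver's /healthz endpoint with a
// 5-second client timeout, skipping certificate verification the way a
// diagnostic probe against a self-signed apiserver cert would.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On timeout this is the "context deadline exceeded (Client.Timeout
		// exceeded while awaiting headers)" error seen throughout this log.
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("apiserver healthy")
}
)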
	I0918 13:26:01.606798    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:01.633954    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:01.634108    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:01.651790    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:01.651888    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:01.671072    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:01.671161    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:01.681974    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:01.682061    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:01.695663    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:01.695747    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:01.707733    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:01.707818    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:01.719215    3941 logs.go:276] 0 containers: []
	W0918 13:26:01.719227    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:01.719303    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:01.729904    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:01.729924    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:01.729930    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:01.742845    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:01.742858    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:01.755819    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:01.755830    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:01.771903    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:01.771914    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:01.784132    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:01.784146    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:01.795597    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:01.795607    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:01.831264    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:01.831280    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:01.846273    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:01.846286    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:01.862088    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:01.862099    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:01.873670    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:01.873681    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:01.899424    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:01.899435    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:01.904178    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:01.904186    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:01.916207    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:01.916218    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:01.931193    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:01.931203    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:01.949237    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:01.949249    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:01.987348    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:01.987357    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:02.008886    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:02.008896    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:04.522655    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:09.524882    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:09.525391    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:09.569529    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:09.569701    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:09.594985    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:09.595116    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:09.609818    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:09.609911    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:09.622097    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:09.622180    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:09.633706    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:09.633791    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:09.644779    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:09.644853    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:09.657016    3941 logs.go:276] 0 containers: []
	W0918 13:26:09.657027    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:09.657097    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:09.668251    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:09.668271    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:09.668277    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:09.703687    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:09.703700    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:09.724878    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:09.724888    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:09.736214    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:09.736226    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:09.758038    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:09.758049    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:09.776708    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:09.776720    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:09.816956    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:09.816968    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:09.830404    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:09.830414    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:09.841519    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:09.841533    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:09.853611    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:09.853625    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:09.857773    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:09.857781    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:09.872284    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:09.872295    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:09.884322    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:09.884332    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:09.908310    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:09.908318    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:09.931826    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:09.931837    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:09.943228    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:09.943239    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:09.955684    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:09.955694    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:12.472679    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:17.474839    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:17.475501    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:17.518747    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:17.518917    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:17.539477    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:17.539601    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:17.554436    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:17.554528    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:17.567259    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:17.567351    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:17.578603    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:17.578681    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:17.589463    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:17.589552    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:17.600117    3941 logs.go:276] 0 containers: []
	W0918 13:26:17.600131    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:17.600214    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:17.613395    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:17.613439    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:17.613449    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:17.649522    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:17.649534    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:17.665868    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:17.665881    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:17.677112    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:17.677126    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:17.688336    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:17.688346    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:17.701602    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:17.701614    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:17.715602    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:17.715616    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:17.727257    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:17.727272    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:17.739291    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:17.739305    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:17.755965    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:17.755979    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:17.773460    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:17.773475    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:17.784713    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:17.784728    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:17.819352    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:17.819368    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:17.847410    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:17.847425    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:17.870217    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:17.870226    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:17.908325    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:17.908334    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:17.912477    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:17.912483    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:20.424636    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:25.426499    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:25.426772    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:25.448000    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:25.448141    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:25.462404    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:25.462500    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:25.474825    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:25.474909    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:25.485712    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:25.485795    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:25.497623    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:25.497706    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:25.507874    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:25.507957    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:25.518245    3941 logs.go:276] 0 containers: []
	W0918 13:26:25.518256    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:25.518332    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:25.528845    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:25.528865    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:25.528873    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:25.533117    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:25.533125    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:25.567433    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:25.567444    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:25.581599    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:25.581615    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:25.593511    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:25.593522    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:25.605307    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:25.605318    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:25.616644    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:25.616655    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:25.629776    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:25.629786    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:25.643987    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:25.643997    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:25.656436    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:25.656447    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:25.668801    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:25.668813    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:25.680270    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:25.680281    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:25.694174    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:25.694184    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:25.708933    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:25.708945    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:25.726966    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:25.726977    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:25.738646    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:25.738658    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:25.775855    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:25.775864    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:28.299436    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:33.301588    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:33.302076    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:33.338259    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:33.338421    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:33.359343    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:33.359461    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:33.374010    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:33.374107    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:33.386076    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:33.386162    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:33.397219    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:33.397296    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:33.409804    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:33.409895    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:33.420056    3941 logs.go:276] 0 containers: []
	W0918 13:26:33.420070    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:33.420153    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:33.431386    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:33.431406    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:33.431411    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:33.443791    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:33.443800    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:33.448667    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:33.448673    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:33.463721    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:33.463730    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:33.476812    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:33.476821    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:33.491160    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:33.491170    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:33.506449    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:33.506461    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:33.528650    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:33.528657    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:33.564676    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:33.564687    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:33.578613    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:33.578623    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:33.591152    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:33.591164    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:33.603854    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:33.603865    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:33.618707    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:33.618717    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:33.630342    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:33.630352    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:33.641340    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:33.641351    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:33.656764    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:33.656775    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:33.692796    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:33.692807    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:36.212557    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:41.213334    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:41.213623    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:41.240063    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:41.240219    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:41.256890    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:41.256987    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:41.269537    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:41.269629    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:41.285476    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:41.285564    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:41.296103    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:41.296191    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:41.306701    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:41.306789    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:41.316906    3941 logs.go:276] 0 containers: []
	W0918 13:26:41.316917    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:41.316989    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:41.327557    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:41.327573    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:41.327578    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:41.362279    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:41.362291    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:41.377678    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:41.377688    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:41.388966    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:41.388977    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:41.400283    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:41.400297    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:41.439552    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:41.439567    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:41.452010    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:41.452023    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:41.466129    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:41.466138    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:41.477292    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:41.477307    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:41.488900    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:41.488912    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:41.493417    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:41.493424    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:41.507702    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:41.507712    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:41.524643    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:41.524656    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:41.547175    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:41.547184    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:41.559444    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:41.559453    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:41.573259    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:41.573273    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:41.585483    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:41.585493    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:44.099067    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:49.101248    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:49.101577    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:49.125529    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:49.125677    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:49.140823    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:49.140916    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:49.157466    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:49.157560    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:49.181342    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:49.181435    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:49.194571    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:49.194647    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:49.205454    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:49.205537    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:49.215546    3941 logs.go:276] 0 containers: []
	W0918 13:26:49.215558    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:49.215630    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:49.226381    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:49.226399    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:49.226404    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:49.230821    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:49.230827    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:49.241725    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:49.241737    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:49.253598    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:49.253611    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:49.289662    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:49.289670    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:49.302049    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:49.302062    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:49.316524    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:49.316539    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:49.333432    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:49.333445    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:49.345114    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:49.345129    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:49.368574    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:49.368581    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:49.386698    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:49.386709    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:49.401648    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:49.401660    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:49.413111    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:49.413123    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:49.424582    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:49.424595    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:49.458563    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:49.458576    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:49.470210    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:49.470221    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:49.485201    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:49.485212    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:51.996768    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:56.998873    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:56.998973    3941 kubeadm.go:597] duration metric: took 4m3.754986625s to restartPrimaryControlPlane
	W0918 13:26:56.999040    3941 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 13:26:56.999073    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
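(This is the turning point of the run: after 4m3.75s of failed healthz probes, minikube gives up on restartPrimaryControlPlane, wipes the node's Kubernetes state with kubeadm reset --force, and rebuilds the cluster with kubeadm init below. A control-flow sketch of that fallback, with hypothetical callbacks standing in for the real steps:

package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureControlPlane sketches the fallback seen above: poll the apiserver
// until a deadline, and if it never becomes healthy, reset the node and
// re-initialize the cluster from scratch. All callbacks are stand-ins.
func ensureControlPlane(healthy func() bool, reset, initCluster func() error, deadline time.Time) error {
	for time.Now().Before(deadline) {
		if healthy() {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	// "Unable to restart control-plane node(s), will reset cluster"
	if err := reset(); err != nil {
		return fmt.Errorf("kubeadm reset failed: %w", err)
	}
	return initCluster()
}

func main() {
	err := ensureControlPlane(
		func() bool { return false },               // healthz never succeeds, as in this run
		func() error { return nil },                // kubeadm reset --force
		func() error { return errors.New("stub") }, // kubeadm init
		time.Now().Add(3*time.Second),
	)
	fmt.Println(err)
}
)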
	I0918 13:26:57.977863    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 13:26:57.983113    3941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:26:57.986059    3941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:26:57.988794    3941 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 13:26:57.988801    3941 kubeadm.go:157] found existing configuration files:
	
	I0918 13:26:57.988827    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf
	I0918 13:26:57.991904    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 13:26:57.991935    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:26:57.995312    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf
	I0918 13:26:57.998024    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 13:26:57.998052    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:26:58.000929    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf
	I0918 13:26:58.003794    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 13:26:58.003816    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:26:58.007206    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf
	I0918 13:26:58.009890    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 13:26:58.009918    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
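(The four grep/rm pairs above are a simple staleness check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so that kubeadm init regenerates it. The same logic in Go, with cleanupStaleConf as a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConf removes every kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func cleanupStaleConf(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			os.Remove(p) // best-effort, like `sudo rm -f`
		}
	}
}

func main() {
	cleanupStaleConf("https://control-plane.minikube.internal:50252", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
)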
	I0918 13:26:58.012528    3941 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 13:26:58.030142    3941 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0918 13:26:58.030335    3941 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 13:26:58.075461    3941 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 13:26:58.075526    3941 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 13:26:58.075576    3941 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 13:26:58.125234    3941 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 13:26:58.129390    3941 out.go:235]   - Generating certificates and keys ...
	I0918 13:26:58.129521    3941 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 13:26:58.129668    3941 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 13:26:58.129714    3941 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 13:26:58.129769    3941 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 13:26:58.129839    3941 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 13:26:58.129878    3941 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 13:26:58.130007    3941 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 13:26:58.130098    3941 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 13:26:58.130206    3941 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 13:26:58.130303    3941 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 13:26:58.130357    3941 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 13:26:58.130446    3941 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 13:26:58.289522    3941 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 13:26:58.360452    3941 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 13:26:58.465958    3941 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 13:26:58.512575    3941 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 13:26:58.540158    3941 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 13:26:58.540539    3941 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 13:26:58.540589    3941 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 13:26:58.632066    3941 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 13:26:58.635215    3941 out.go:235]   - Booting up control plane ...
	I0918 13:26:58.635259    3941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 13:26:58.635294    3941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 13:26:58.635329    3941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 13:26:58.635379    3941 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 13:26:58.635463    3941 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 13:27:03.641998    3941 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.006945 seconds
	I0918 13:27:03.642266    3941 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 13:27:03.657488    3941 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 13:27:04.174112    3941 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 13:27:04.174224    3941 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-314000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 13:27:04.683307    3941 kubeadm.go:310] [bootstrap-token] Using token: 8lhv3k.f2rxbxynoqw4hg0y
	I0918 13:27:04.689452    3941 out.go:235]   - Configuring RBAC rules ...
	I0918 13:27:04.689506    3941 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 13:27:04.689558    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 13:27:04.696078    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 13:27:04.696941    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 13:27:04.697863    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 13:27:04.698624    3941 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 13:27:04.702058    3941 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 13:27:04.885389    3941 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 13:27:05.087245    3941 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 13:27:05.087557    3941 kubeadm.go:310] 
	I0918 13:27:05.087631    3941 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 13:27:05.087638    3941 kubeadm.go:310] 
	I0918 13:27:05.087756    3941 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 13:27:05.087791    3941 kubeadm.go:310] 
	I0918 13:27:05.087842    3941 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 13:27:05.087877    3941 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 13:27:05.087916    3941 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 13:27:05.087922    3941 kubeadm.go:310] 
	I0918 13:27:05.087947    3941 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 13:27:05.087949    3941 kubeadm.go:310] 
	I0918 13:27:05.087970    3941 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 13:27:05.087972    3941 kubeadm.go:310] 
	I0918 13:27:05.087994    3941 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 13:27:05.088032    3941 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 13:27:05.088071    3941 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 13:27:05.088075    3941 kubeadm.go:310] 
	I0918 13:27:05.088119    3941 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 13:27:05.088158    3941 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 13:27:05.088160    3941 kubeadm.go:310] 
	I0918 13:27:05.088195    3941 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8lhv3k.f2rxbxynoqw4hg0y \
	I0918 13:27:05.088240    3941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 \
	I0918 13:27:05.088254    3941 kubeadm.go:310] 	--control-plane 
	I0918 13:27:05.088256    3941 kubeadm.go:310] 
	I0918 13:27:05.088295    3941 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 13:27:05.088301    3941 kubeadm.go:310] 
	I0918 13:27:05.088342    3941 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8lhv3k.f2rxbxynoqw4hg0y \
	I0918 13:27:05.088403    3941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 
	I0918 13:27:05.088462    3941 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
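(The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A short Go computation of that value; the ca.crt path is this cluster's certificateDir, adjust as needed:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
)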
	I0918 13:27:05.088470    3941 cni.go:84] Creating CNI manager for ""
	I0918 13:27:05.088478    3941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:27:05.093200    3941 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 13:27:05.101148    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 13:27:05.104223    3941 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
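(The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced below. An illustrative bridge + portmap conflist of the usual shape, written from memory to disk the same way; the field values are assumptions, not a byte-for-byte copy of what minikube generates:

package main

import (
	"log"
	"os"
)

// An illustrative bridge + portmap CNI conflist; values are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
)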
	I0918 13:27:05.109374    3941 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 13:27:05.109434    3941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 13:27:05.109454    3941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-314000 minikube.k8s.io/updated_at=2024_09_18T13_27_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=running-upgrade-314000 minikube.k8s.io/primary=true
	I0918 13:27:05.113528    3941 ops.go:34] apiserver oom_adj: -16
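(The oom_adj check above verifies that kube-apiserver runs with a negative OOM adjustment (-16 here), which makes the kernel's OOM killer far less likely to pick it when memory runs short. A tiny Go equivalent of the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command; the PID is a placeholder, since the log derives it with pgrep:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// oom_adj is the legacy interface; modern kernels also expose
	// /proc/<pid>/oom_score_adj with a wider range.
	data, err := os.ReadFile("/proc/1234/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}
)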
	I0918 13:27:05.156078    3941 kubeadm.go:1113] duration metric: took 46.696333ms to wait for elevateKubeSystemPrivileges
	I0918 13:27:05.156168    3941 kubeadm.go:394] duration metric: took 4m11.926200792s to StartCluster
	I0918 13:27:05.156182    3941 settings.go:142] acquiring lock: {Name:mkbb043d0459391a7d922bd686e90e22968feef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:05.156272    3941 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:27:05.156641    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:05.156828    3941 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:27:05.156850    3941 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 13:27:05.156884    3941 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-314000"
	I0918 13:27:05.156892    3941 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-314000"
	W0918 13:27:05.156897    3941 addons.go:243] addon storage-provisioner should already be in state true
	I0918 13:27:05.156910    3941 host.go:66] Checking if "running-upgrade-314000" exists ...
	I0918 13:27:05.156934    3941 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-314000"
	I0918 13:27:05.156938    3941 config.go:182] Loaded profile config "running-upgrade-314000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:27:05.156999    3941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-314000"
	I0918 13:27:05.161136    3941 out.go:177] * Verifying Kubernetes components...
	I0918 13:27:05.161755    3941 kapi.go:59] client config for running-upgrade-314000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105df9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:27:05.165470    3941 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-314000"
	W0918 13:27:05.165475    3941 addons.go:243] addon default-storageclass should already be in state true
	I0918 13:27:05.165484    3941 host.go:66] Checking if "running-upgrade-314000" exists ...
	I0918 13:27:05.166020    3941 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:05.166025    3941 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 13:27:05.166031    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	I0918 13:27:05.169095    3941 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:27:05.173227    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:27:05.177194    3941 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:05.177201    3941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 13:27:05.177208    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	I0918 13:27:05.268116    3941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:27:05.273425    3941 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:27:05.273475    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:27:05.277443    3941 api_server.go:72] duration metric: took 120.60575ms to wait for apiserver process to appear ...
	I0918 13:27:05.277452    3941 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:27:05.277458    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:05.291367    3941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:05.304145    3941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:05.626580    3941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 13:27:05.626594    3941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 13:27:10.279397    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:10.279430    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:15.280086    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:15.280113    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:20.280420    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:20.280461    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:25.280927    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:25.280964    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:30.281623    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:30.281679    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:35.282528    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:35.282578    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0918 13:27:35.628114    3941 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0918 13:27:35.636264    3941 out.go:177] * Enabled addons: storage-provisioner
	I0918 13:27:35.644228    3941 addons.go:510] duration metric: took 30.488177417s for enable addons: enabled=[storage-provisioner]
	I0918 13:27:40.283651    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:40.283691    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:45.285109    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:45.285147    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:50.287241    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:50.287281    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:55.289390    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:55.289410    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:00.291037    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:00.291083    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:05.292064    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:05.292192    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:05.305218    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:05.305303    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:05.316310    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:05.316396    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:05.326939    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:05.327023    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:05.337843    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:05.337926    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:05.348354    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:05.348452    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:05.359340    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:05.359431    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:05.369439    3941 logs.go:276] 0 containers: []
	W0918 13:28:05.369451    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:05.369523    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:05.379887    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:05.379904    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:05.379909    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:05.394495    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:05.394506    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:05.406372    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:05.406386    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:05.418683    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:05.418697    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:05.437179    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:05.437190    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:05.455798    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:05.455814    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:05.468229    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:05.468243    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:05.494173    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:05.494181    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:05.506111    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:05.506122    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:05.540968    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:05.540980    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:05.545333    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:05.545343    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:05.581880    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:05.581889    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:05.595752    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:05.595767    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:08.109541    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:13.111760    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:13.111869    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:13.123156    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:13.123246    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:13.133950    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:13.134038    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:13.144339    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:13.144415    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:13.155500    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:13.155587    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:13.166808    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:13.166893    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:13.177174    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:13.177254    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:13.188070    3941 logs.go:276] 0 containers: []
	W0918 13:28:13.188086    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:13.188158    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:13.198589    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:13.198608    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:13.198613    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:13.219014    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:13.219024    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:13.236381    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:13.236396    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:13.248996    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:13.249011    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:13.253582    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:13.253591    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:13.290517    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:13.290530    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:13.304360    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:13.304372    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:13.316943    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:13.316954    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:13.328898    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:13.328909    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:13.352354    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:13.352365    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:13.386905    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:13.386914    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:13.400947    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:13.400958    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:13.412911    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:13.412926    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:15.929703    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:20.931970    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:20.932161    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:20.950073    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:20.950183    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:20.964117    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:20.964212    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:20.978804    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:20.978883    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:20.989603    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:20.989671    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:21.000088    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:21.000173    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:21.010456    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:21.010535    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:21.021360    3941 logs.go:276] 0 containers: []
	W0918 13:28:21.021371    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:21.021436    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:21.038503    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:21.038518    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:21.038523    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:21.072361    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:21.072369    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:21.087362    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:21.087374    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:21.099407    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:21.099418    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:21.123369    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:21.123380    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:21.135606    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:21.135618    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:21.140707    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:21.140719    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:21.175370    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:21.175381    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:21.189844    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:21.189853    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:21.203537    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:21.203547    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:21.218092    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:21.218102    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:21.229705    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:21.229716    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:21.246911    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:21.246922    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:23.760425    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:28.762614    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:28.762835    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:28.778878    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:28.778984    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:28.795903    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:28.795986    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:28.807054    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:28.807340    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:28.820291    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:28.820379    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:28.831098    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:28.831187    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:28.842056    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:28.842140    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:28.852614    3941 logs.go:276] 0 containers: []
	W0918 13:28:28.852627    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:28.852701    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:28.863373    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:28.863387    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:28.863393    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:28.896593    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:28.896604    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:28.910513    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:28.910527    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:28.925523    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:28.925532    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:28.939109    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:28.939122    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:28.951120    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:28.951131    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:28.968819    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:28.968833    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:28.984743    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:28.984758    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:29.009941    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:29.009948    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:29.014491    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:29.014499    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:29.056868    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:29.056880    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:29.071773    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:29.071783    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:29.082981    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:29.082992    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:31.595221    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:36.597304    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:36.597519    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:36.615806    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:36.615926    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:36.631706    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:36.631801    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:36.647489    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:36.647575    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:36.658515    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:36.658597    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:36.672547    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:36.672619    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:36.682945    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:36.683030    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:36.693581    3941 logs.go:276] 0 containers: []
	W0918 13:28:36.693594    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:36.693680    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:36.705107    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:36.705122    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:36.705129    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:36.718734    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:36.718748    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:36.730592    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:36.730602    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:36.742150    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:36.742161    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:36.759167    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:36.759177    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:36.771404    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:36.771417    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:36.795502    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:36.795513    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:36.806862    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:36.806874    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:36.840233    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:36.840241    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:36.844613    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:36.844620    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:36.878700    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:36.878715    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:36.893647    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:36.893659    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:36.905594    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:36.905609    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:39.422269    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:44.424452    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:44.424627    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:44.442446    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:44.442559    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:44.456602    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:44.456697    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:44.469310    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:44.469396    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:44.480011    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:44.480091    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:44.490656    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:44.490748    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:44.501672    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:44.501759    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:44.512247    3941 logs.go:276] 0 containers: []
	W0918 13:28:44.512263    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:44.512327    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:44.526946    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:44.526962    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:44.526967    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:44.544699    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:44.544710    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:44.556204    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:44.556217    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:44.560866    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:44.560872    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:44.575381    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:44.575392    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:44.590281    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:44.590291    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:44.601407    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:44.601417    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:44.615866    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:44.615876    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:44.627772    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:44.627783    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:44.651407    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:44.651415    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:44.663164    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:44.663175    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:44.696892    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:44.696902    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:44.736919    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:44.736934    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:47.250666    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:52.252748    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:52.253022    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:52.275002    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:52.275127    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:52.291300    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:52.291399    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:52.303650    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:52.303738    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:52.314573    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:52.314660    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:52.324845    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:52.324938    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:52.335441    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:52.335523    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:52.345387    3941 logs.go:276] 0 containers: []
	W0918 13:28:52.345398    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:52.345471    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:52.356413    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:52.356428    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:52.356434    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:52.377189    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:52.377202    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:52.381804    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:52.381810    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:52.420149    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:52.420160    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:52.434384    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:52.434394    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:52.446521    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:52.446532    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:52.458401    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:52.458412    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:52.470015    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:52.470027    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:52.494090    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:52.494098    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:52.505755    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:52.505766    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:52.538767    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:52.538777    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:52.566134    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:52.566144    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:52.581716    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:52.581728    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:55.098480    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:00.100572    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:00.100769    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:00.115844    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:00.115947    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:00.128060    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:00.128146    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:00.139107    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:00.139191    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:00.149726    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:00.149802    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:00.160286    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:00.160375    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:00.171287    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:00.171357    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:00.181955    3941 logs.go:276] 0 containers: []
	W0918 13:29:00.181971    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:00.182030    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:00.192418    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:00.192439    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:00.192445    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:00.226694    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:00.226711    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:00.241083    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:00.241095    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:00.265777    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:00.265785    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:00.277766    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:00.277777    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:00.294107    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:00.294117    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:00.305437    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:00.305449    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:00.322448    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:00.322459    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:00.355838    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:00.355848    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:00.360450    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:00.360457    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:00.374790    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:00.374799    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:00.386878    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:00.386889    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:00.399321    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:00.399332    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:02.912607    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:07.914436    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:07.914699    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:07.933414    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:07.933532    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:07.948785    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:07.948873    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:07.960542    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:07.960632    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:07.971323    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:07.971405    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:07.982393    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:07.982472    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:07.993456    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:07.993541    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:08.003849    3941 logs.go:276] 0 containers: []
	W0918 13:29:08.003861    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:08.003929    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:08.014994    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:08.015010    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:08.015016    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:08.028993    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:08.029009    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:08.042617    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:08.042628    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:08.058832    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:08.058844    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:08.070627    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:08.070640    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:08.095582    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:08.095592    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:08.106876    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:08.106885    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:08.141846    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:08.141857    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:08.146253    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:08.146258    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:08.187139    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:08.187154    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:08.199726    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:08.199737    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:08.218902    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:08.218915    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:08.237398    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:08.237410    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:10.750706    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:15.752574    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:15.752790    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:15.770737    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:15.770844    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:15.785679    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:15.785767    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:15.796502    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:15.796593    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:15.811353    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:15.811429    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:15.822101    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:15.822191    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:15.833457    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:15.833532    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:15.843402    3941 logs.go:276] 0 containers: []
	W0918 13:29:15.843413    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:15.843481    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:15.854869    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:15.854885    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:15.854890    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:15.872240    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:15.872248    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:15.883956    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:15.883970    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:15.907592    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:15.907600    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:15.940457    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:15.940466    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:15.975765    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:15.975777    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:15.991175    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:15.991185    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:16.006069    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:16.006083    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:16.018400    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:16.018412    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:16.023324    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:16.023332    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:16.035606    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:16.035621    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:16.047270    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:16.047284    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:16.062694    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:16.062708    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:18.576106    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:23.578187    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:23.578377    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:23.590394    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:23.590490    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:23.608750    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:23.608843    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:23.621118    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:23.621210    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:23.631466    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:23.631553    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:23.642063    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:23.642151    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:23.652585    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:23.652669    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:23.662850    3941 logs.go:276] 0 containers: []
	W0918 13:29:23.662862    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:23.662930    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:23.673279    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:23.673298    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:23.673303    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:23.684740    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:23.684753    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:23.697235    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:23.697247    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:23.710067    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:23.710080    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:23.725900    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:23.725911    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:23.751615    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:23.751626    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:23.785208    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:23.785216    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:23.799560    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:23.799571    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:23.813937    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:23.813948    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:23.825415    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:23.825425    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:23.843106    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:23.843116    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:23.847944    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:23.847951    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:23.885127    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:23.885138    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:23.897191    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:23.897206    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:23.909736    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:23.909749    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
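Each retry cycle above has the same shape: minikube first resolves the container ID for every control-plane component by filtering docker ps on the k8s_<component> name prefix, then tails the last 400 lines of each matching container. A minimal stand-alone sketch of that discovery-and-tail step, assuming direct access to the guest's docker CLI (in the real flow every command runs over SSH via ssh_runner):

    #!/bin/bash
    # Resolve container IDs per component the way the log shows,
    # then tail each container's recent output.
    for component in kube-apiserver etcd coredns kube-scheduler \
                     kube-proxy kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}')
      if [ -z "$ids" ]; then
        # Mirrors the "No container was found matching ..." warning above.
        echo "no container matching k8s_${component}" >&2
        continue
      fi
      for id in $ids; do
        echo "== ${component} [${id}] =="
        docker logs --tail 400 "$id"
      done
    done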
	I0918 13:29:26.426434    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:31.428564    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:31.428771    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:31.444391    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:31.444487    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:31.457782    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:31.457866    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:31.470785    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:31.470879    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:31.482315    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:31.482422    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:31.493177    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:31.493257    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:31.504207    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:31.504282    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:31.515087    3941 logs.go:276] 0 containers: []
	W0918 13:29:31.515102    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:31.515171    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:31.531541    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:31.531557    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:31.531564    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:31.536256    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:31.536262    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:31.550562    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:31.550575    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:31.562052    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:31.562062    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:31.579921    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:31.579935    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:31.612299    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:31.612307    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:31.624022    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:31.624037    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:31.635977    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:31.635987    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:31.651276    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:31.651289    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:31.663010    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:31.663023    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:31.677329    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:31.677343    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:31.689180    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:31.689196    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:31.705799    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:31.705816    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:31.741106    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:31.741118    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:31.767041    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:31.767051    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
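The five-second gap between each "Checking apiserver healthz" line and its matching "stopped: ... context deadline exceeded" line is the HTTP client timeout expiring: the apiserver at 10.0.2.15:8443 never answers within the deadline, so the loop gathers logs and probes again. An equivalent manual probe, assuming curl is available in the guest and skipping TLS verification because the apiserver presents a cluster-internal certificate:

    # Probe the healthz endpoint with the same 5s ceiling the log shows.
    # curl exits with status 28 when the timeout is hit.
    curl --insecure --silent --show-error --max-time 5 \
      https://10.0.2.15:8443/healthz && echo ok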
	I0918 13:29:34.279596    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:39.280761    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:39.280990    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:39.301084    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:39.301200    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:39.315666    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:39.315748    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:39.328017    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:39.328112    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:39.339420    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:39.339507    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:39.350342    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:39.350423    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:39.361458    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:39.361540    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:39.371362    3941 logs.go:276] 0 containers: []
	W0918 13:29:39.371378    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:39.371449    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:39.381537    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:39.381555    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:39.381561    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:39.394373    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:39.394385    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:39.406959    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:39.406972    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:39.418659    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:39.418674    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:39.430490    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:39.430501    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:39.465873    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:39.465887    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:39.480666    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:39.480676    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:39.499712    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:39.499725    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:39.515295    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:39.515305    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:39.549925    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:39.549935    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:39.554691    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:39.554700    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:39.566317    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:39.566329    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:39.590964    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:39.590972    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:39.608088    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:39.608097    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:39.620062    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:39.620076    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:42.133375    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:47.135583    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:47.135904    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:47.157673    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:47.157798    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:47.182940    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:47.183038    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:47.196036    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:47.196113    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:47.206575    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:47.206665    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:47.217014    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:47.217095    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:47.227268    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:47.227351    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:47.237929    3941 logs.go:276] 0 containers: []
	W0918 13:29:47.237941    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:47.238013    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:47.248525    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:47.248543    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:47.248548    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:47.262957    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:47.262970    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:47.279067    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:47.279080    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:47.291784    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:47.291794    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:47.316487    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:47.316496    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:47.350487    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:47.350495    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:47.386109    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:47.386122    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:47.400225    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:47.400240    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:47.411733    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:47.411747    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:47.416120    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:47.416127    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:47.427778    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:47.427792    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:47.445001    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:47.445017    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:47.457595    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:47.457611    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:47.469771    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:47.469786    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:47.481169    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:47.481191    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
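The "container status" step is the only one with a fallback built into the command itself: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is installed, and when crictl is absent the backtick substitution yields the bare word crictl, which fails to execute, so the || falls through to plain docker ps -a. Written out without the one-liner tricks, the same logic reads roughly:

    # Prefer crictl for container status; fall back to docker
    # when crictl is missing or errors out.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a && exit 0
    fi
    sudo docker ps -a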
	I0918 13:29:49.996902    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:54.998975    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:54.999135    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:55.012597    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:55.012690    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:55.023365    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:55.023454    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:55.033939    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:55.034034    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:55.044596    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:55.044681    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:55.057906    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:55.057986    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:55.069117    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:55.069198    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:55.079236    3941 logs.go:276] 0 containers: []
	W0918 13:29:55.079248    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:55.079320    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:55.089514    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:55.089530    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:55.089535    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:55.103535    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:55.103544    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:55.119168    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:55.119180    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:55.130677    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:55.130688    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:55.142948    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:55.142963    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:55.155164    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:55.155180    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:55.178970    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:55.178977    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:55.220129    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:55.220144    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:55.235141    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:55.235151    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:55.246724    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:55.246735    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:55.258758    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:55.258771    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:55.281811    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:55.281822    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:55.316668    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:55.316679    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:55.320912    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:55.320919    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:55.334916    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:55.334931    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:57.848711    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:02.850868    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:02.851136    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:02.875761    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:02.875894    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:02.892196    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:02.892297    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:02.905019    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:02.905116    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:02.915969    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:02.916054    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:02.926701    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:02.926786    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:02.938228    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:02.938304    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:02.948996    3941 logs.go:276] 0 containers: []
	W0918 13:30:02.949007    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:02.949073    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:02.996083    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:02.996102    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:02.996107    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:03.010001    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:03.010012    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:03.028476    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:03.028485    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:03.039717    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:03.039729    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:03.052150    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:03.052161    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:03.067693    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:03.067705    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:03.082517    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:03.082529    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:03.087037    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:03.087044    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:03.100749    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:03.100760    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:03.126005    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:03.126015    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:03.137592    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:03.137603    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:03.152515    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:03.152526    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:03.188057    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:03.188069    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:03.206158    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:03.206169    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:03.240595    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:03.240607    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:05.755138    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:10.757368    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:10.757654    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:10.779458    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:10.779595    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:10.796253    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:10.796339    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:10.809422    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:10.809509    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:10.824714    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:10.824795    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:10.835758    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:10.835834    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:10.848729    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:10.848796    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:10.858948    3941 logs.go:276] 0 containers: []
	W0918 13:30:10.858960    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:10.859027    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:10.870213    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:10.870232    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:10.870237    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:10.882227    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:10.882238    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:10.897084    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:10.897098    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:10.909002    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:10.909014    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:10.921007    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:10.921017    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:10.932151    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:10.932163    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:10.966553    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:10.966564    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:10.978944    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:10.978956    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:11.000763    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:11.000777    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:11.015570    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:11.015583    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:11.020945    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:11.020957    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:11.032694    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:11.032706    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:11.058205    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:11.058225    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:11.094295    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:11.094306    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:11.112538    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:11.112550    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
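Besides per-container logs, every cycle pulls four host-level sources: the kubelet and docker/cri-docker journals, kernel messages at warning level and above, and a kubectl describe nodes issued through the version-pinned binary minikube stores in the guest (here v1.24.1, matching the deployed cluster). Collected by hand inside the guest, the same set looks roughly like:

    # Host-level diagnostics gathered in each cycle.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    # -H human-readable, -P no pager, -L=never disables color;
    # only warnings and worse are kept.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig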
	I0918 13:30:13.629361    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:18.631505    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:18.631782    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:18.649907    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:18.650002    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:18.664205    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:18.664288    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:18.675778    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:18.675868    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:18.686747    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:18.686834    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:18.702125    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:18.702202    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:18.712538    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:18.712625    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:18.722867    3941 logs.go:276] 0 containers: []
	W0918 13:30:18.722878    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:18.722952    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:18.734605    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:18.734621    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:18.734626    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:18.769335    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:18.769345    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:18.781306    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:18.781317    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:18.792896    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:18.792906    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:18.807821    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:18.807832    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:18.825164    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:18.825174    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:18.850793    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:18.850803    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:18.855657    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:18.855663    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:18.870000    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:18.870015    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:18.884704    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:18.884716    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:18.896276    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:18.896285    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:18.912190    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:18.912200    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:18.925196    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:18.925210    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:18.960465    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:18.960476    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:18.973215    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:18.973228    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:21.487285    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:26.489525    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:26.489762    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:26.507090    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:26.507205    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:26.520278    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:26.520356    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:26.531754    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:26.531840    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:26.542308    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:26.542397    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:26.553049    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:26.553139    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:26.564588    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:26.564664    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:26.578839    3941 logs.go:276] 0 containers: []
	W0918 13:30:26.578850    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:26.578917    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:26.588987    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:26.589004    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:26.589009    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:26.600681    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:26.600695    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:26.612505    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:26.612519    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:26.627442    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:26.627457    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:26.644884    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:26.644899    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:26.658608    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:26.658621    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:26.693917    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:26.693929    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:26.706033    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:26.706047    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:26.717696    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:26.717710    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:26.730224    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:26.730236    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:26.734726    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:26.734734    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:26.749298    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:26.749311    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:26.774473    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:26.774480    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:26.808124    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:26.808131    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:26.819942    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:26.819951    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:29.334322    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:34.336016    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:34.336231    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:34.353641    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:34.353753    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:34.367894    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:34.367987    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:34.381552    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:34.381630    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:34.392334    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:34.392417    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:34.403611    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:34.403694    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:34.420113    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:34.420188    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:34.430460    3941 logs.go:276] 0 containers: []
	W0918 13:30:34.430475    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:34.430547    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:34.441285    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:34.441302    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:34.441310    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:34.446450    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:34.446460    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:34.461841    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:34.461857    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:34.476246    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:34.476257    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:34.488037    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:34.488048    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:34.501082    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:34.501096    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:34.512548    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:34.512561    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:34.535321    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:34.535328    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:34.567441    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:34.567450    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:34.578801    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:34.578810    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:34.590292    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:34.590305    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:34.604839    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:34.604852    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:34.616165    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:34.616179    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:34.650253    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:34.650267    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:34.669033    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:34.669047    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:37.181884    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:42.183975    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:42.184300    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:42.212855    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:42.212990    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:42.250748    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:42.250842    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:42.268863    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:42.268955    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:42.282361    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:42.282452    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:42.292845    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:42.292923    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:42.303660    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:42.303736    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:42.313692    3941 logs.go:276] 0 containers: []
	W0918 13:30:42.313708    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:42.313786    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:42.324400    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:42.324415    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:42.324421    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:42.329591    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:42.329601    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:42.341550    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:42.341560    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:42.356294    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:42.356303    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:42.381329    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:42.381341    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:42.393791    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:42.393807    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:42.427245    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:42.427256    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:42.443191    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:42.443203    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:42.461994    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:42.462005    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:42.476499    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:42.476509    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:42.488743    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:42.488754    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:42.500819    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:42.500831    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:42.512238    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:42.512250    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:42.546839    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:42.546854    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:42.559139    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:42.559150    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:45.072818    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:50.074893    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:50.075153    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:50.095589    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:50.095705    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:50.109479    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:50.109554    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:50.121542    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:50.121624    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:50.132039    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:50.132113    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:50.142539    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:50.142613    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:50.153967    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:50.154049    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:50.164722    3941 logs.go:276] 0 containers: []
	W0918 13:30:50.164733    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:50.164797    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:50.175287    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:50.175304    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:50.175309    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:50.187285    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:50.187296    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:50.205441    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:50.205452    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:50.218458    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:50.218473    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:50.254379    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:50.254388    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:50.266205    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:50.266216    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:50.278387    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:50.278397    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:50.290219    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:50.290228    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:50.302130    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:50.302140    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:50.336861    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:50.336873    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:50.352519    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:50.352529    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:50.366765    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:50.366775    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:50.378595    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:50.378605    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:50.383399    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:50.383407    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:50.403320    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:50.403335    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:52.928660    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:57.930846    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:57.931088    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:57.948838    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:57.948948    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:57.962096    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:57.962177    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:57.976330    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:57.976405    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:57.986733    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:57.986802    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:57.998210    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:57.998300    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:58.009098    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:58.009190    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:58.019598    3941 logs.go:276] 0 containers: []
	W0918 13:30:58.019611    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:58.019686    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:58.030223    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:58.030241    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:58.030247    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:58.042163    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:58.042178    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:58.060356    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:58.060368    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:58.082984    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:58.082995    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:58.106503    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:58.106511    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:58.118108    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:58.118119    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:58.131987    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:58.132002    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:58.146826    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:58.146837    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:58.158982    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:58.158994    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:58.193208    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:58.193217    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:58.197967    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:58.197977    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:58.212705    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:58.212720    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:58.225895    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:58.225906    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:58.247013    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:58.247027    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:58.263100    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:58.263111    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:00.797573    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:05.799836    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:05.806029    3941 out.go:201] 
	W0918 13:31:05.809078    3941 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0918 13:31:05.809102    3941 out.go:270] * 
	W0918 13:31:05.810924    3941 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:31:05.824959    3941 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-314000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-18 13:31:05.955934 -0700 PDT m=+3237.058006751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-314000 -n running-upgrade-314000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-314000 -n running-upgrade-314000: exit status 2 (15.5473025s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-314000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p test-preload-913000         | test-preload-913000       | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT | 18 Sep 24 13:20 PDT |
	| start   | -p scheduled-stop-962000       | scheduled-stop-962000     | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT |                     |
	|         | --memory=2048 --driver=qemu2   |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-962000       | scheduled-stop-962000     | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT | 18 Sep 24 13:20 PDT |
	| start   | -p skaffold-127000             | skaffold-127000           | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT |                     |
	|         | --memory=2600 --driver=qemu2   |                           |         |         |                     |                     |
	| delete  | -p skaffold-127000             | skaffold-127000           | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT | 18 Sep 24 13:20 PDT |
	| start   | -p offline-docker-716000       | offline-docker-716000     | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-748000         | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-748000         | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:20 PDT |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| delete  | -p offline-docker-716000       | offline-docker-716000     | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT | 18 Sep 24 13:21 PDT |
	| start   | -p kubernetes-upgrade-593000   | kubernetes-upgrade-593000 | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-748000         | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT |                     |
	|         | --no-kubernetes --driver=qemu2 |                           |         |         |                     |                     |
	|         |                                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-748000         | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT |                     |
	|         | --no-kubernetes --driver=qemu2 |                           |         |         |                     |                     |
	|         |                                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-593000   | kubernetes-upgrade-593000 | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT | 18 Sep 24 13:21 PDT |
	| start   | -p kubernetes-upgrade-593000   | kubernetes-upgrade-593000 | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-748000 sudo    | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-748000         | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT | 18 Sep 24 13:21 PDT |
	| start   | -p NoKubernetes-748000         | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-593000   | kubernetes-upgrade-593000 | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT | 18 Sep 24 13:21 PDT |
	| start   | -p stopped-upgrade-367000      | minikube                  | jenkins | v1.26.0 | 18 Sep 24 13:21 PDT | 18 Sep 24 13:22 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-748000 sudo    | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-748000         | NoKubernetes-748000       | jenkins | v1.34.0 | 18 Sep 24 13:21 PDT | 18 Sep 24 13:21 PDT |
	| start   | -p running-upgrade-314000      | minikube                  | jenkins | v1.26.0 | 18 Sep 24 13:21 PDT | 18 Sep 24 13:22 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-314000      | running-upgrade-314000    | jenkins | v1.34.0 | 18 Sep 24 13:22 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-367000 stop    | minikube                  | jenkins | v1.26.0 | 18 Sep 24 13:22 PDT | 18 Sep 24 13:22 PDT |
	| start   | -p stopped-upgrade-367000      | stopped-upgrade-367000    | jenkins | v1.34.0 | 18 Sep 24 13:22 PDT |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=qemu2                 |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 13:22:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 13:22:51.424511    3992 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:22:51.424662    3992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:22:51.424666    3992 out.go:358] Setting ErrFile to fd 2...
	I0918 13:22:51.424669    3992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:22:51.424840    3992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:22:51.426050    3992 out.go:352] Setting JSON to false
	I0918 13:22:51.445059    3992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3130,"bootTime":1726687841,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:22:51.445127    3992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:22:51.449630    3992 out.go:177] * [stopped-upgrade-367000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:22:51.465118    3992 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:22:51.465173    3992 notify.go:220] Checking for updates...
	I0918 13:22:51.472655    3992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:22:51.475599    3992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:22:51.478595    3992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:22:51.481631    3992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:22:51.483082    3992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:22:51.486914    3992 config.go:182] Loaded profile config "stopped-upgrade-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:22:51.490586    3992 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 13:22:51.493577    3992 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:22:51.497582    3992 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:22:51.504573    3992 start.go:297] selected driver: qemu2
	I0918 13:22:51.504581    3992 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:51.504651    3992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:22:51.507421    3992 cni.go:84] Creating CNI manager for ""
	I0918 13:22:51.507458    3992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:22:51.507488    3992 start.go:340] cluster config:
	{Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:51.507552    3992 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:22:51.515630    3992 out.go:177] * Starting "stopped-upgrade-367000" primary control-plane node in "stopped-upgrade-367000" cluster
	I0918 13:22:51.519599    3992 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0918 13:22:51.519617    3992 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0918 13:22:51.519626    3992 cache.go:56] Caching tarball of preloaded images
	I0918 13:22:51.519694    3992 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:22:51.519700    3992 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0918 13:22:51.519761    3992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/config.json ...
	I0918 13:22:51.520283    3992 start.go:360] acquireMachinesLock for stopped-upgrade-367000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:22:51.520322    3992 start.go:364] duration metric: took 32.542µs to acquireMachinesLock for "stopped-upgrade-367000"
	I0918 13:22:51.520331    3992 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:22:51.520340    3992 fix.go:54] fixHost starting: 
	I0918 13:22:51.520463    3992 fix.go:112] recreateIfNeeded on stopped-upgrade-367000: state=Stopped err=<nil>
	W0918 13:22:51.520472    3992 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:22:51.524599    3992 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-367000" ...
	I0918 13:22:47.583095    3941 docker.go:649] duration metric: took 935.51575ms to copy over tarball
	I0918 13:22:47.583176    3941 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 13:22:48.840738    3941 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.257581833s)
	I0918 13:22:48.840752    3941 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 13:22:48.857218    3941 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 13:22:48.860383    3941 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0918 13:22:48.865682    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:48.954432    3941 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 13:22:50.174992    3941 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.220577125s)
	I0918 13:22:50.175107    3941 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 13:22:50.186297    3941 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 13:22:50.186306    3941 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0918 13:22:50.186312    3941 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 13:22:50.190149    3941 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.193077    3941 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:50.195970    3941 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.196045    3941 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.198612    3941 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.198610    3941 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:50.200027    3941 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.200135    3941 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.201114    3941 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.201434    3941 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.202346    3941 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0918 13:22:50.202488    3941 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.203531    3941 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.203878    3941 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.204335    3941 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0918 13:22:50.205610    3941 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.524276    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.535848    3941 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0918 13:22:50.535874    3941 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.535946    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:22:50.546420    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0918 13:22:50.594383    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.605451    3941 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0918 13:22:50.605470    3941 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.605529    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0918 13:22:50.617076    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0918 13:22:50.621993    3941 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0918 13:22:50.622121    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.633073    3941 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0918 13:22:50.633102    3941 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.633182    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:22:50.636242    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.647281    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0918 13:22:50.647421    3941 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:22:50.652675    3941 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0918 13:22:50.652689    3941 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0918 13:22:50.652700    3941 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.652712    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0918 13:22:50.652762    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:22:50.663086    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.674544    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0918 13:22:50.676207    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0918 13:22:50.689603    3941 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0918 13:22:50.689627    3941 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.689690    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:22:50.701384    3941 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0918 13:22:50.701410    3941 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0918 13:22:50.701489    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0918 13:22:50.722614    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.724844    3941 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:22:50.724855    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0918 13:22:50.734905    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0918 13:22:50.734936    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0918 13:22:50.735065    3941 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0918 13:22:50.745693    3941 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0918 13:22:50.745718    3941 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.745784    3941 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:22:50.785209    3941 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0918 13:22:50.785237    3941 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0918 13:22:50.785255    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0918 13:22:50.785259    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0918 13:22:50.793451    3941 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0918 13:22:50.793460    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0918 13:22:50.818246    3941 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0918 13:22:51.068628    3941 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 13:22:51.069310    3941 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:51.109508    3941 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0918 13:22:51.109569    3941 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:51.109738    3941 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:22:51.532520    3992 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:22:51.532605    3992 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50257-:22,hostfwd=tcp::50258-:2376,hostname=stopped-upgrade-367000 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/disk.qcow2
	I0918 13:22:51.575957    3992 main.go:141] libmachine: STDOUT: 
	I0918 13:22:51.575986    3992 main.go:141] libmachine: STDERR: 
	I0918 13:22:51.575993    3992 main.go:141] libmachine: Waiting for VM to start (ssh -p 50257 docker@127.0.0.1)...
	I0918 13:22:52.594311    3941 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.484573542s)
	I0918 13:22:52.594353    3941 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 13:22:52.594935    3941 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:22:52.600231    3941 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0918 13:22:52.600287    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0918 13:22:52.650969    3941 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:22:52.650994    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0918 13:22:52.900688    3941 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 13:22:52.900725    3941 cache_images.go:92] duration metric: took 2.714475417s to LoadCachedImages
	W0918 13:22:52.900763    3941 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0918 13:22:52.900768    3941 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0918 13:22:52.900824    3941 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-314000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 13:22:52.900909    3941 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 13:22:52.913919    3941 cni.go:84] Creating CNI manager for ""
	I0918 13:22:52.913944    3941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:22:52.913954    3941 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 13:22:52.913966    3941 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-314000 NodeName:running-upgrade-314000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 13:22:52.914037    3941 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-314000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 13:22:52.914108    3941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0918 13:22:52.917711    3941 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 13:22:52.917755    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 13:22:52.921196    3941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0918 13:22:52.926101    3941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 13:22:52.931579    3941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0918 13:22:52.936458    3941 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0918 13:22:52.937856    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:22:53.027793    3941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:22:53.032585    3941 certs.go:68] Setting up /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000 for IP: 10.0.2.15
	I0918 13:22:53.032594    3941 certs.go:194] generating shared ca certs ...
	I0918 13:22:53.032606    3941 certs.go:226] acquiring lock for ca certs: {Name:mk6bf733e3b7a8269fa0cc74c7cf113ceab149df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.032773    3941 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key
	I0918 13:22:53.032821    3941 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key
	I0918 13:22:53.032828    3941 certs.go:256] generating profile certs ...
	I0918 13:22:53.032922    3941 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.key
	I0918 13:22:53.032941    3941 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede
	I0918 13:22:53.032950    3941 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0918 13:22:53.107209    3941 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede ...
	I0918 13:22:53.107216    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede: {Name:mk9a4ddd13893e646499520f9e37a03e12f5d465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.107636    3941 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede ...
	I0918 13:22:53.107641    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede: {Name:mk424950dbf89558b44cb97b1c982ae4f8f49cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.107798    3941 certs.go:381] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt.c6930ede -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt
	I0918 13:22:53.107941    3941 certs.go:385] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key.c6930ede -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key
	I0918 13:22:53.108088    3941 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/proxy-client.key
	I0918 13:22:53.108224    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem (1338 bytes)
	W0918 13:22:53.108252    3941 certs.go:480] ignoring /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516_empty.pem, impossibly tiny 0 bytes
	I0918 13:22:53.108257    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 13:22:53.108283    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem (1082 bytes)
	I0918 13:22:53.108310    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem (1123 bytes)
	I0918 13:22:53.108335    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem (1679 bytes)
	I0918 13:22:53.108388    3941 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem (1708 bytes)
	I0918 13:22:53.108706    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 13:22:53.116408    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 13:22:53.123692    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 13:22:53.130788    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 13:22:53.137792    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 13:22:53.144929    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 13:22:53.151706    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 13:22:53.158237    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 13:22:53.165532    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem --> /usr/share/ca-certificates/1516.pem (1338 bytes)
	I0918 13:22:53.173147    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem --> /usr/share/ca-certificates/15162.pem (1708 bytes)
	I0918 13:22:53.180296    3941 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 13:22:53.186992    3941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 13:22:53.192280    3941 ssh_runner.go:195] Run: openssl version
	I0918 13:22:53.194072    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 13:22:53.197549    3941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:22:53.199335    3941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:22:53.199362    3941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:22:53.201364    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 13:22:53.204104    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1516.pem && ln -fs /usr/share/ca-certificates/1516.pem /etc/ssl/certs/1516.pem"
	I0918 13:22:53.207122    3941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1516.pem
	I0918 13:22:53.208633    3941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:53 /usr/share/ca-certificates/1516.pem
	I0918 13:22:53.208657    3941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1516.pem
	I0918 13:22:53.210629    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1516.pem /etc/ssl/certs/51391683.0"
	I0918 13:22:53.213822    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15162.pem && ln -fs /usr/share/ca-certificates/15162.pem /etc/ssl/certs/15162.pem"
	I0918 13:22:53.217405    3941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15162.pem
	I0918 13:22:53.219136    3941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:53 /usr/share/ca-certificates/15162.pem
	I0918 13:22:53.219164    3941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15162.pem
	I0918 13:22:53.221013    3941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15162.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 13:22:53.223856    3941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 13:22:53.225422    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 13:22:53.227536    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 13:22:53.229377    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 13:22:53.231242    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 13:22:53.233036    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 13:22:53.234780    3941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
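The -checkend probes above ask whether each control-plane certificate stays valid for another 86400 seconds (24 hours); a non-zero exit marks the cert for regeneration. A sketch of one such check:

	if ! openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "front-proxy-client.crt expires within 24h" >&2   # would trigger regeneration
	fi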
	I0918 13:22:53.236574    3941 kubeadm.go:392] StartCluster: {Name:running-upgrade-314000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50252 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-314000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:53.236645    3941 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:22:53.246851    3941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 13:22:53.250363    3941 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 13:22:53.250371    3941 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 13:22:53.250397    3941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 13:22:53.254161    3941 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.254404    3941 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-314000" does not appear in /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:22:53.254455    3941 kubeconfig.go:62] /Users/jenkins/minikube-integration/19667-1040/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-314000" cluster setting kubeconfig missing "running-upgrade-314000" context setting]
	I0918 13:22:53.254591    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:22:53.256388    3941 kapi.go:59] client config for running-upgrade-314000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105df9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:22:53.256723    3941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 13:22:53.259571    3941 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-314000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
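Drift detection is just diff's exit status on the deployed kubeadm.yaml versus the freshly rendered kubeadm.yaml.new; the two hunks above show the CRI socket gaining its unix:// scheme and the kubelet cgroup driver moving from systemd to cgroupfs. A minimal sketch of the check:

	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "kubeadm config drift detected; reconfiguring from the .new file"
	fi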
	I0918 13:22:53.259576    3941 kubeadm.go:1160] stopping kube-system containers ...
	I0918 13:22:53.259626    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:22:53.270603    3941 docker.go:483] Stopping containers: [dffd1cd50f36 ceef4d344b48 e22e86a4ce24 6b1ac7de9044 48327b73c9bd 7e9f605d25c5 6632cd6218f3 b055a7066d86 ab5a367ffd08 cb6295d7aef9 b9a4e1994b07 9555ba0e451f 627ec8a706ce 1e9a779de08e]
	I0918 13:22:53.270673    3941 ssh_runner.go:195] Run: docker stop dffd1cd50f36 ceef4d344b48 e22e86a4ce24 6b1ac7de9044 48327b73c9bd 7e9f605d25c5 6632cd6218f3 b055a7066d86 ab5a367ffd08 cb6295d7aef9 b9a4e1994b07 9555ba0e451f 627ec8a706ce 1e9a779de08e
	I0918 13:22:53.281642    3941 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 13:22:53.383573    3941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:22:53.388447    3941 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 18 20:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 18 20:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 18 20:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 18 20:22 /etc/kubernetes/scheduler.conf
	
	I0918 13:22:53.388489    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf
	I0918 13:22:53.392261    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.392290    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:22:53.395819    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf
	I0918 13:22:53.398873    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.398896    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:22:53.402137    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf
	I0918 13:22:53.405481    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.405504    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:22:53.408759    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf
	I0918 13:22:53.411557    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:22:53.411587    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
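Each of the four kubeconfigs is kept only if it already references the expected control-plane endpoint; otherwise it is removed so the kubeconfig phase below can regenerate it. Roughly, with this run's endpoint:

	ep="https://control-plane.minikube.internal:50252"
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done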
	I0918 13:22:53.414248    3941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:22:53.417454    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:53.441328    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:53.959122    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:54.170859    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:22:54.194174    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
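Rather than a full kubeadm init, the restart path replays individual init phases in order. A sketch of the sequence just executed:

	K=/var/lib/minikube/binaries/v1.24.1
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is deliberately unquoted so "certs all" splits into two arguments
	  sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done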
	I0918 13:22:54.214554    3941 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:22:54.214644    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:22:54.717024    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:22:55.216716    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:22:55.221064    3941 api_server.go:72] duration metric: took 1.006538916s to wait for apiserver process to appear ...
	I0918 13:22:55.221074    3941 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:22:55.221084    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:00.223017    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:00.223041    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:05.223122    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:05.223159    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
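Each "context deadline exceeded" line is a single probe timing out, not a terminal failure: the wait loop keeps polling /healthz until the apiserver answers or the overall start timeout expires. Roughly:

	until curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	  sleep 1   # retried until the apiserver responds or the outer deadline fires
	done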
	I0918 13:23:11.355424    3992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/config.json ...
	I0918 13:23:11.355885    3992 machine.go:93] provisionDockerMachine start ...
	I0918 13:23:11.356065    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.356348    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.356359    3992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 13:23:10.223581    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:10.223677    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:11.432782    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 13:23:11.432797    3992 buildroot.go:166] provisioning hostname "stopped-upgrade-367000"
	I0918 13:23:11.432878    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.433036    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.433044    3992 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-367000 && echo "stopped-upgrade-367000" | sudo tee /etc/hostname
	I0918 13:23:11.503725    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-367000
	
	I0918 13:23:11.503779    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.503884    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.503892    3992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-367000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-367000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-367000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 13:23:11.570087    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 13:23:11.570101    3992 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19667-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19667-1040/.minikube}
	I0918 13:23:11.570110    3992 buildroot.go:174] setting up certificates
	I0918 13:23:11.570114    3992 provision.go:84] configureAuth start
	I0918 13:23:11.570121    3992 provision.go:143] copyHostCerts
	I0918 13:23:11.570186    3992 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem, removing ...
	I0918 13:23:11.570194    3992 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem
	I0918 13:23:11.570546    3992 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem (1082 bytes)
	I0918 13:23:11.570722    3992 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem, removing ...
	I0918 13:23:11.570726    3992 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem
	I0918 13:23:11.570785    3992 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem (1123 bytes)
	I0918 13:23:11.570881    3992 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem, removing ...
	I0918 13:23:11.570886    3992 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem
	I0918 13:23:11.570930    3992 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem (1679 bytes)
	I0918 13:23:11.571011    3992 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-367000 san=[127.0.0.1 localhost minikube stopped-upgrade-367000]
	I0918 13:23:11.690341    3992 provision.go:177] copyRemoteCerts
	I0918 13:23:11.690385    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 13:23:11.690395    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:23:11.725329    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 13:23:11.731776    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 13:23:11.738825    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 13:23:11.746194    3992 provision.go:87] duration metric: took 176.074458ms to configureAuth
	I0918 13:23:11.746203    3992 buildroot.go:189] setting minikube options for container-runtime
	I0918 13:23:11.746308    3992 config.go:182] Loaded profile config "stopped-upgrade-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:23:11.746342    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.746432    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.746437    3992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 13:23:11.811870    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 13:23:11.811880    3992 buildroot.go:70] root file system type: tmpfs
	I0918 13:23:11.811929    3992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 13:23:11.811996    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.812114    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.812147    3992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 13:23:11.880743    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 13:23:11.880812    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.880930    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.880939    3992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 13:23:12.233998    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 13:23:12.234013    3992 machine.go:96] duration metric: took 878.141292ms to provisionDockerMachine
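The unit install above is a diff-or-replace idiom: the candidate unit is written to docker.service.new, and only when it differs from the installed copy (or, as here, no installed copy exists and diff fails) is it moved into place and the daemon reloaded, enabled, and restarted. The same command, unfolded:

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}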
	I0918 13:23:12.234019    3992 start.go:293] postStartSetup for "stopped-upgrade-367000" (driver="qemu2")
	I0918 13:23:12.234025    3992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 13:23:12.234079    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 13:23:12.234087    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:23:12.269657    3992 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 13:23:12.270864    3992 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 13:23:12.270872    3992 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/addons for local assets ...
	I0918 13:23:12.270958    3992 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/files for local assets ...
	I0918 13:23:12.271084    3992 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem -> 15162.pem in /etc/ssl/certs
	I0918 13:23:12.271206    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 13:23:12.274208    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem --> /etc/ssl/certs/15162.pem (1708 bytes)
	I0918 13:23:12.281601    3992 start.go:296] duration metric: took 47.577083ms for postStartSetup
	I0918 13:23:12.281617    3992 fix.go:56] duration metric: took 20.761826459s for fixHost
	I0918 13:23:12.281666    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:12.281780    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:12.281789    3992 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 13:23:12.348653    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726690992.045376212
	
	I0918 13:23:12.348664    3992 fix.go:216] guest clock: 1726690992.045376212
	I0918 13:23:12.348668    3992 fix.go:229] Guest: 2024-09-18 13:23:12.045376212 -0700 PDT Remote: 2024-09-18 13:23:12.281619 -0700 PDT m=+20.887712293 (delta=-236.242788ms)
	I0918 13:23:12.348684    3992 fix.go:200] guest clock delta is within tolerance: -236.242788ms
	I0918 13:23:12.348687    3992 start.go:83] releasing machines lock for "stopped-upgrade-367000", held for 20.828906083s
	I0918 13:23:12.348769    3992 ssh_runner.go:195] Run: cat /version.json
	I0918 13:23:12.348771    3992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 13:23:12.348777    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:23:12.348787    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	W0918 13:23:12.349509    3992 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50543->127.0.0.1:50257: write: broken pipe
	I0918 13:23:12.349527    3992 retry.go:31] will retry after 212.276912ms: ssh: handshake failed: write tcp 127.0.0.1:50543->127.0.0.1:50257: write: broken pipe
	W0918 13:23:12.383571    3992 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0918 13:23:12.383637    3992 ssh_runner.go:195] Run: systemctl --version
	I0918 13:23:12.385717    3992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 13:23:12.387398    3992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 13:23:12.387436    3992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0918 13:23:12.390943    3992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0918 13:23:12.396675    3992 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 13:23:12.396700    3992 start.go:495] detecting cgroup driver to use...
	I0918 13:23:12.396777    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 13:23:12.403985    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0918 13:23:12.407380    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 13:23:12.410848    3992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 13:23:12.410913    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 13:23:12.414648    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 13:23:12.418408    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 13:23:12.421874    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 13:23:12.425244    3992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 13:23:12.428834    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 13:23:12.432267    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 13:23:12.435427    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 13:23:12.439124    3992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 13:23:12.442356    3992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
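The sed passes above align containerd with the cgroupfs driver chosen for the kubelet, and IPv4 forwarding is switched on for pod traffic. The two decisive lines, in isolation:

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # pods need forwarding between interfaces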
	I0918 13:23:12.445373    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:12.523311    3992 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 13:23:12.534315    3992 start.go:495] detecting cgroup driver to use...
	I0918 13:23:12.534386    3992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 13:23:12.539172    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 13:23:12.544112    3992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 13:23:12.554233    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 13:23:12.559317    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 13:23:12.564489    3992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 13:23:12.602236    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 13:23:12.640023    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 13:23:12.645675    3992 ssh_runner.go:195] Run: which cri-dockerd
	I0918 13:23:12.646915    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 13:23:12.649951    3992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 13:23:12.654944    3992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 13:23:12.740580    3992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 13:23:12.826880    3992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 13:23:12.826941    3992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 13:23:12.832323    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:12.906921    3992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 13:23:14.025103    3992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.118194208s)
	I0918 13:23:14.025175    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 13:23:14.031260    3992 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0918 13:23:14.037047    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 13:23:14.041782    3992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 13:23:14.115953    3992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 13:23:14.191517    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:14.251915    3992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 13:23:14.258424    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 13:23:14.262887    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:14.329393    3992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 13:23:14.373636    3992 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 13:23:14.373723    3992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 13:23:14.377157    3992 start.go:563] Will wait 60s for crictl version
	I0918 13:23:14.377223    3992 ssh_runner.go:195] Run: which crictl
	I0918 13:23:14.378603    3992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 13:23:14.393629    3992 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0918 13:23:14.393735    3992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 13:23:14.412438    3992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 13:23:14.431614    3992 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0918 13:23:14.431770    3992 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0918 13:23:14.433010    3992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
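The /etc/hosts edit is idempotent: any previous host.minikube.internal entry is filtered out before the fresh one is appended, and the result is copied back with sudo (a plain redirect would not run as root). The same idiom, unfolded:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts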
	I0918 13:23:14.436405    3992 kubeadm.go:883] updating cluster {Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0918 13:23:14.436455    3992 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0918 13:23:14.436515    3992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 13:23:14.447684    3992 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 13:23:14.447696    3992 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0918 13:23:14.447750    3992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 13:23:14.451588    3992 ssh_runner.go:195] Run: which lz4
	I0918 13:23:14.453316    3992 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 13:23:14.454674    3992 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 13:23:14.454698    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0918 13:23:15.439085    3992 docker.go:649] duration metric: took 985.855583ms to copy over tarball
	I0918 13:23:15.439155    3992 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
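The preload restore copies the ~360 MB lz4 tarball into the guest and unpacks it over /var, repopulating /var/lib/docker with the cached image layers; xattrs are preserved so file capabilities survive. The guest-side steps:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4   # free the space once extracted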
	I0918 13:23:15.224367    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:15.224390    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:16.601606    3992 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162461875s)
	I0918 13:23:16.601620    3992 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 13:23:16.617652    3992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 13:23:16.620817    3992 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0918 13:23:16.625860    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:16.704563    3992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 13:23:18.404206    3992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.699671417s)
	I0918 13:23:18.404308    3992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 13:23:18.418307    3992 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 13:23:18.418319    3992 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0918 13:23:18.418326    3992 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 13:23:18.422241    3992 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0918 13:23:18.425064    3992 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:18.427779    3992 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.427930    3992 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0918 13:23:18.429919    3992 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.430008    3992 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:18.431638    3992 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.431758    3992 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.433468    3992 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.433822    3992 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.434942    3992 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.435440    3992 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.436786    3992 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.436821    3992 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.438740    3992 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.439944    3992 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.806151    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0918 13:23:18.817383    3992 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0918 13:23:18.817411    3992 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0918 13:23:18.817480    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0918 13:23:18.827069    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0918 13:23:18.827192    3992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0918 13:23:18.828836    3992 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0918 13:23:18.828847    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0918 13:23:18.832887    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.838251    3992 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0918 13:23:18.838265    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0918 13:23:18.838737    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.846330    3992 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0918 13:23:18.846352    3992 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.846423    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.868823    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0918 13:23:18.878834    3992 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0918 13:23:18.878972    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.880770    3992 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0918 13:23:18.880814    3992 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0918 13:23:18.880821    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0918 13:23:18.880834    3992 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.880877    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.885685    3992 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0918 13:23:18.885703    3992 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.885768    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.895998    3992 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0918 13:23:18.896020    3992 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.896091    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.898353    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0918 13:23:18.911870    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.916899    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0918 13:23:18.917870    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0918 13:23:18.917983    3992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:23:18.926123    3992 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0918 13:23:18.926148    3992 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.926161    3992 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0918 13:23:18.926182    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0918 13:23:18.926219    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.945430    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.965295    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0918 13:23:18.969440    3992 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:23:18.969452    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0918 13:23:18.977118    3992 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0918 13:23:18.977143    3992 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.977210    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:19.013648    3992 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0918 13:23:19.013680    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0918 13:23:19.289299    3992 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 13:23:19.289661    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:19.317494    3992 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0918 13:23:19.317551    3992 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:19.317682    3992 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:19.340927    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 13:23:19.341074    3992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:23:19.342782    3992 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0918 13:23:19.342797    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0918 13:23:19.372231    3992 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:23:19.372245    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0918 13:23:19.612650    3992 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 13:23:19.612689    3992 cache_images.go:92] duration metric: took 1.194384667s to LoadCachedImages
	W0918 13:23:19.612726    3992 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
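The repeated needs-transfer / Removing image / Loading image cycle above comes from a naming mismatch: the preload ships images under their old k8s.gcr.io names, so lookups for the registry.k8s.io names miss and each image is streamed in from the host cache instead (the closing warning shows the kube-controller-manager tarball was itself missing from that cache). The per-image load step is simply:

	sudo cat /var/lib/minikube/images/pause_3.7 | docker load   # tarball previously scp'd from the host cache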
	I0918 13:23:19.612735    3992 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0918 13:23:19.612793    3992 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-367000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 13:23:19.612868    3992 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 13:23:19.626171    3992 cni.go:84] Creating CNI manager for ""
	I0918 13:23:19.626182    3992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:23:19.626189    3992 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 13:23:19.626199    3992 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-367000 NodeName:stopped-upgrade-367000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 13:23:19.626268    3992 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-367000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
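Note: the generated kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks the documents and prints each kind, handy when eyeballing a config like this one (assumes the gopkg.in/yaml.v3 module; not part of minikube):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// For the config above this prints, in order: InitConfiguration,
		// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}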
	I0918 13:23:19.626329    3992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0918 13:23:19.629719    3992 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 13:23:19.629755    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 13:23:19.632299    3992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0918 13:23:19.637098    3992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 13:23:19.642145    3992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0918 13:23:19.647532    3992 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0918 13:23:19.648858    3992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
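Note: the one-liner above pins control-plane.minikube.internal in /etc/hosts. grep -v strips any stale entry, the new line is appended, and the result is staged in a temp file before being copied back, so the write is never partial. The same logic as a Go sketch (it only stages to /tmp; copying over /etc/hosts needs root, as in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) { // same filter as grep -v $'\t<host>$'
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "10.0.2.15\t"+host) // IP from the log
	// Stage the result; the runner then does "sudo cp" into /etc/hosts.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}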
	I0918 13:23:19.652267    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:19.730801    3992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:23:19.737280    3992 certs.go:68] Setting up /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000 for IP: 10.0.2.15
	I0918 13:23:19.737289    3992 certs.go:194] generating shared ca certs ...
	I0918 13:23:19.737299    3992 certs.go:226] acquiring lock for ca certs: {Name:mk6bf733e3b7a8269fa0cc74c7cf113ceab149df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:19.737512    3992 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key
	I0918 13:23:19.737551    3992 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key
	I0918 13:23:19.737559    3992 certs.go:256] generating profile certs ...
	I0918 13:23:19.737649    3992 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.key
	I0918 13:23:19.737668    3992 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f
	I0918 13:23:19.737689    3992 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0918 13:23:19.966707    3992 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f ...
	I0918 13:23:19.966723    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f: {Name:mke4091d5b8545646fea833379b021649e2b0bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:19.968287    3992 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f ...
	I0918 13:23:19.968295    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f: {Name:mkb798a6a3d753260ffed16c1ed60a7be2f3fb02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:19.969191    3992 certs.go:381] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt
	I0918 13:23:19.969388    3992 certs.go:385] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key
	I0918 13:23:19.969560    3992 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/proxy-client.key
	I0918 13:23:19.969708    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem (1338 bytes)
	W0918 13:23:19.969733    3992 certs.go:480] ignoring /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516_empty.pem, impossibly tiny 0 bytes
	I0918 13:23:19.969738    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 13:23:19.969759    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem (1082 bytes)
	I0918 13:23:19.969778    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem (1123 bytes)
	I0918 13:23:19.969801    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem (1679 bytes)
	I0918 13:23:19.969842    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem (1708 bytes)
	I0918 13:23:19.970188    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 13:23:19.977470    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 13:23:19.984035    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 13:23:19.990960    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 13:23:19.998430    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 13:23:20.006013    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 13:23:20.013341    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 13:23:20.020061    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 13:23:20.026994    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 13:23:20.034204    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem --> /usr/share/ca-certificates/1516.pem (1338 bytes)
	I0918 13:23:20.041279    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem --> /usr/share/ca-certificates/15162.pem (1708 bytes)
	I0918 13:23:20.047849    3992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 13:23:20.052866    3992 ssh_runner.go:195] Run: openssl version
	I0918 13:23:20.054762    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 13:23:20.058436    3992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:23:20.060012    3992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:23:20.060037    3992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:23:20.061833    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 13:23:20.064736    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1516.pem && ln -fs /usr/share/ca-certificates/1516.pem /etc/ssl/certs/1516.pem"
	I0918 13:23:20.067550    3992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1516.pem
	I0918 13:23:20.069008    3992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:53 /usr/share/ca-certificates/1516.pem
	I0918 13:23:20.069034    3992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1516.pem
	I0918 13:23:20.070710    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1516.pem /etc/ssl/certs/51391683.0"
	I0918 13:23:20.074099    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15162.pem && ln -fs /usr/share/ca-certificates/15162.pem /etc/ssl/certs/15162.pem"
	I0918 13:23:20.077198    3992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15162.pem
	I0918 13:23:20.078525    3992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:53 /usr/share/ca-certificates/15162.pem
	I0918 13:23:20.078544    3992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15162.pem
	I0918 13:23:20.080403    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15162.pem /etc/ssl/certs/3ec20f2e.0"
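Note: the ls/openssl/ln sequence repeated above installs each CA into the OpenSSL trust directory: "openssl x509 -hash" prints the certificate's subject hash (b5213941, 51391683, 3ec20f2e in this run), and the <hash>.0 symlink is the name OpenSSL looks up at verification time. A sketch of one round, shelling out to openssl as the runner does (writing into /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs: drop any stale link, then point <hash>.0 at the PEM.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}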
	I0918 13:23:20.083395    3992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 13:23:20.084943    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 13:23:20.086774    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 13:23:20.088656    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 13:23:20.090659    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 13:23:20.092494    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 13:23:20.094296    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
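Note: each "openssl x509 -checkend 86400" above asks one question: does the certificate expire within the next 24 hours? The same check with the Go standard library (path taken from the log; output and exit codes mirror openssl's):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in file")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: nonzero exit if NotAfter falls inside the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}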
	I0918 13:23:20.096295    3992 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:23:20.096375    3992 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:23:20.106326    3992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 13:23:20.109857    3992 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 13:23:20.109868    3992 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 13:23:20.109897    3992 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 13:23:20.112448    3992 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:23:20.112736    3992 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-367000" does not appear in /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:23:20.112831    3992 kubeconfig.go:62] /Users/jenkins/minikube-integration/19667-1040/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-367000" cluster setting kubeconfig missing "stopped-upgrade-367000" context setting]
	I0918 13:23:20.113664    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:20.114613    3992 kapi.go:59] client config for stopped-upgrade-367000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e05800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:23:20.114944    3992 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 13:23:20.117943    3992 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-367000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
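Note: the drift check above is just "diff -u old new": exit status 0 means the on-disk kubeadm.yaml still matches the freshly generated one, status 1 means reconfigure (here the CRI socket scheme and the cgroup driver changed). A sketch of that exit-code handling:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",     // config currently on the node
		"/var/tmp/minikube/kubeadm.yaml.new") // config generated this run
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift, keeping existing config")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// diff exits 1 when the files differ; the diff itself is on stdout.
		fmt.Printf("config drift detected:\n%s", out)
	default:
		fmt.Fprintln(os.Stderr, "diff failed:", err)
		os.Exit(2)
	}
}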
	I0918 13:23:20.117948    3992 kubeadm.go:1160] stopping kube-system containers ...
	I0918 13:23:20.117998    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:23:20.128716    3992 docker.go:483] Stopping containers: [17f70e497468 7337a97ddd7b 014c9f589a4f 56f7c42e2286 f2971c3f4847 2d9c69459424 5d7f652712f1 b19830618519]
	I0918 13:23:20.128806    3992 ssh_runner.go:195] Run: docker stop 17f70e497468 7337a97ddd7b 014c9f589a4f 56f7c42e2286 f2971c3f4847 2d9c69459424 5d7f652712f1 b19830618519
	I0918 13:23:20.139397    3992 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 13:23:20.144925    3992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:23:20.147609    3992 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 13:23:20.147615    3992 kubeadm.go:157] found existing configuration files:
	
	I0918 13:23:20.147638    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf
	I0918 13:23:20.150524    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 13:23:20.150550    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:23:20.153361    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf
	I0918 13:23:20.155769    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 13:23:20.155799    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:23:20.158843    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf
	I0918 13:23:20.161947    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 13:23:20.161971    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:23:20.164635    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf
	I0918 13:23:20.167153    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 13:23:20.167174    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 13:23:20.170066    3992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:23:20.172640    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.195852    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.529646    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.664669    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.687390    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
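Note: the restart path does not rerun a full "kubeadm init"; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the repaired config. The loop below sketches that sequence (phase names as they appear in the log; assumes kubeadm is on PATH, whereas the runner injects /var/lib/minikube/binaries/v1.24.1 as shown above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}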
	I0918 13:23:20.715416    3992 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:23:20.715495    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:23:21.217601    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:23:20.224939    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:20.224957    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:21.716983    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:23:21.721781    3992 api_server.go:72] duration metric: took 1.006391958s to wait for apiserver process to appear ...
	I0918 13:23:21.721792    3992 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:23:21.721802    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
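Note: from here on, both processes (pids 3941 and 3992, whose lines interleave below) sit in the same loop: GET https://10.0.2.15:8443/healthz, time out, log "stopped:", retry, and periodically dump component logs. A minimal sketch of that poll (the real code trusts the cluster CA; InsecureSkipVerify here is only to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s spacing of attempts in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz") // the failure mode seen in this run
}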
	I0918 13:23:25.225640    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:25.225682    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:26.723778    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:26.723816    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:30.226715    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:30.226756    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:31.724327    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:31.724373    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:35.228129    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:35.228154    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:36.724827    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:36.724881    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:40.229787    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:40.229823    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:41.725839    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:41.725878    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:45.231990    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:45.232031    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:46.726651    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:46.726684    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:50.234222    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:50.234277    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:51.727731    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:51.727783    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:55.236044    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:55.236169    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:23:55.248125    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:23:55.248227    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:23:55.258655    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:23:55.258732    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:23:55.269332    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:23:55.269419    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:23:55.279596    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:23:55.279684    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:23:55.290090    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:23:55.290172    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:23:55.303206    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:23:55.303282    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:23:55.313239    3941 logs.go:276] 0 containers: []
	W0918 13:23:55.313249    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:23:55.313310    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:23:55.325544    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:23:55.325572    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:23:55.325582    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:23:55.402809    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:23:55.402820    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:23:55.414403    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:23:55.414418    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
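Note: the container-status command repeated throughout this section, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl and falls back to docker when crictl is absent. The same preference expressed in Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tool := "docker" // fallback, as in "|| sudo docker ps -a"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s ps -a failed: %v\n%s", tool, err, out)
		os.Exit(1)
	}
	os.Stdout.Write(out)
}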
	I0918 13:23:55.427351    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:23:55.427359    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:23:55.441220    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:23:55.441230    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:23:55.452411    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:23:55.452423    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:23:55.468410    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:23:55.468420    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:23:55.480057    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:23:55.480072    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:23:55.506915    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:23:55.506923    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:23:55.544390    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:23:55.544399    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:23:55.557567    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:23:55.557578    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:23:55.575597    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:23:55.575609    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:23:55.586846    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:23:55.586862    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:23:55.598107    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:23:55.598118    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:23:55.602481    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:23:55.602488    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:23:55.616606    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:23:55.616617    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:23:55.633829    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:23:55.633841    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:23:56.729230    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:56.729273    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:58.150808    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:01.731078    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:01.731122    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:03.152952    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:03.153203    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:03.176664    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:03.176825    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:03.194886    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:03.194973    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:03.207887    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:03.207974    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:03.223187    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:03.223266    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:03.233558    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:03.233665    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:03.243916    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:03.243991    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:03.253924    3941 logs.go:276] 0 containers: []
	W0918 13:24:03.253939    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:03.254002    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:03.264826    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:03.264843    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:03.264856    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:03.279235    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:03.279246    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:03.297895    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:03.297906    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:03.309651    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:03.309660    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:03.323860    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:03.323875    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:03.338180    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:03.338190    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:03.362768    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:03.362778    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:03.398720    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:03.398727    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:03.435501    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:03.435511    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:03.453416    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:03.453428    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:03.471119    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:03.471129    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:03.482215    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:03.482225    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:03.500194    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:03.500205    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:03.514820    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:03.514829    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:03.531391    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:03.531403    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:03.543991    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:03.544002    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:03.548739    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:03.548745    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:06.062993    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:06.733364    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:06.733414    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:11.065635    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:11.065953    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:11.094235    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:11.094390    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:11.111704    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:11.111806    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:11.125296    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:11.125385    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:11.137188    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:11.137274    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:11.147604    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:11.147692    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:11.158476    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:11.158555    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:11.168766    3941 logs.go:276] 0 containers: []
	W0918 13:24:11.168780    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:11.168846    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:11.179153    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:11.179172    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:11.179177    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:11.204499    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:11.204508    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:11.215863    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:11.215874    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:11.231204    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:11.231215    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:11.245753    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:11.245764    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:11.259629    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:11.259642    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:11.272045    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:11.272056    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:11.286130    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:11.286141    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:11.300848    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:11.300858    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:11.316909    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:11.316920    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:11.334005    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:11.334015    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:11.345488    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:11.345498    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:11.349828    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:11.349835    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:11.361262    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:11.361272    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:11.397047    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:11.397057    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:11.409116    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:11.409127    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:11.420157    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:11.420170    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:11.735619    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:11.735662    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:13.958178    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:16.738004    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:16.738102    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:18.960444    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:18.960612    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:18.976798    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:18.976903    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:18.989577    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:18.989664    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:19.000933    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:19.001022    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:19.011617    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:19.011698    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:19.021998    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:19.022081    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:19.032889    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:19.032979    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:19.044721    3941 logs.go:276] 0 containers: []
	W0918 13:24:19.044735    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:19.044819    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:19.057104    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:19.057124    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:19.057129    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:19.069256    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:19.069270    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:19.081103    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:19.081114    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:19.106220    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:19.106227    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:19.125379    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:19.125392    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:19.161370    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:19.161384    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:19.174213    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:19.174227    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:19.189064    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:19.189076    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:19.201876    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:19.201887    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:19.222173    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:19.222187    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:19.233574    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:19.233585    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:19.245303    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:19.245317    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:19.249878    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:19.249885    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:19.263441    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:19.263452    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:19.277693    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:19.277703    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:19.289587    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:19.289597    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:19.301330    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:19.301343    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:21.841412    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:21.739257    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:21.739530    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:21.761221    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:21.761349    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:21.778666    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:21.778770    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:21.790966    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:21.791053    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:21.802245    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:21.802329    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:21.812733    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:21.812812    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:21.827925    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:21.828012    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:21.838677    3992 logs.go:276] 0 containers: []
	W0918 13:24:21.838690    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:21.838754    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:21.848669    3992 logs.go:276] 0 containers: []
	W0918 13:24:21.848680    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:21.848697    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:21.848703    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:21.886366    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:21.886375    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:21.890351    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:21.890357    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:21.904564    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:21.904577    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:21.920475    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:21.920485    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:21.937694    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:21.937703    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:22.016840    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:22.016853    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:22.035108    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:22.035118    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:22.048942    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:22.048952    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:22.062090    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:22.062100    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:22.088582    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:22.088593    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:22.101426    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:22.101440    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:22.126663    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:22.126671    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
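The "container status" command is a portability fallback: the backtick substitution expands to crictl's path when the binary exists, otherwise to the bare word crictl, whose invocation then fails so the || branch falls through to plain docker ps -a. The same idiom with modern quoting, as a sketch rather than minikube source:

    # Prefer crictl if installed; on any failure, fall back to docker.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a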
	I0918 13:24:22.137976    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:22.137987    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:22.152965    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:22.152976    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:24.671158    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:26.843491    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
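Two minikube processes (PIDs 3992 and 3941, apparently parallel test runs against separate profiles) are each polling their own guest's apiserver at https://10.0.2.15:8443 — the default guest address under QEMU user-mode NAT, which is why both VMs report the same IP — and every probe in this section times out. A sketch of reproducing the probe by hand from inside a guest:

    # -k skips certificate verification; --max-time mirrors the client timeout in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz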
	I0918 13:24:26.843689    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:26.862528    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:26.862650    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:26.876746    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:26.876831    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:26.888440    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:26.888526    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:26.899444    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:26.899534    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:26.912662    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:26.912739    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:26.925257    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:26.925345    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:26.935410    3941 logs.go:276] 0 containers: []
	W0918 13:24:26.935422    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:26.935500    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:26.951393    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
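Before each log sweep, minikube enumerates the control-plane containers one component at a time with a docker name filter; the k8s_ prefix is the naming convention cri-dockerd applies to pod containers. The "2 containers" results likely pair a current instance with an exited earlier one, consistent with a cluster that has been restarted. The lookup, generalized over the components the log checks, as a sketch:

    # Same per-component lookup as the log, written as a loop.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'
    done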
	I0918 13:24:26.951414    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:26.951420    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:26.965865    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:26.965875    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:26.979004    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:26.979020    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:26.996224    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:26.996237    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:27.008493    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:27.008503    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:27.022713    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:27.022722    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:27.033938    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:27.033949    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:27.045627    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:27.045639    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:27.050103    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:27.050112    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:27.061376    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:27.061389    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:27.075170    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:27.075181    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:27.087849    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:27.087866    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:27.114686    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:27.114699    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:27.126032    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:27.126048    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:27.137710    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:27.137722    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:27.164089    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:27.164096    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:29.672303    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:29.672560    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:29.690693    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:29.690815    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:29.705010    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:29.705095    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:29.716910    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:29.716985    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:29.727564    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:29.727652    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:29.737571    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:29.737656    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:29.755728    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:29.755805    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:29.767557    3992 logs.go:276] 0 containers: []
	W0918 13:24:29.767568    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:29.767640    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:29.777525    3992 logs.go:276] 0 containers: []
	W0918 13:24:29.777536    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:29.777544    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:29.777549    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:29.801807    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:29.801815    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:29.818523    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:29.818534    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:29.855618    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:29.855633    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:29.870341    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:29.870351    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:29.883536    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:29.883546    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:29.921066    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:29.921074    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:29.935526    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:29.935538    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:29.947451    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:29.947464    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:29.961616    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:29.961626    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:29.975856    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:29.975867    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:30.000963    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:30.000983    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:30.012687    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:30.012699    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:30.024323    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:30.024337    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:30.042241    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:30.042251    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:27.202201    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:27.202213    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:29.746138    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:32.548685    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:34.748187    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:34.748374    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:34.762053    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:34.762155    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:34.773379    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:34.773471    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:34.784068    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:34.784159    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:34.794704    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:34.794790    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:34.807906    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:34.807990    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:34.818745    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:34.818831    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:34.828546    3941 logs.go:276] 0 containers: []
	W0918 13:24:34.828559    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:34.828627    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:34.838793    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:34.838812    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:34.838818    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:34.843047    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:34.843055    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:34.856362    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:34.856371    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:34.870604    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:34.870614    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:34.885786    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:34.885796    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:34.896999    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:34.897014    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:34.923079    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:34.923090    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:34.936684    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:34.936695    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:34.947822    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:34.947833    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:34.968961    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:34.968976    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:34.986792    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:34.986808    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:34.998785    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:34.998799    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:35.037608    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:35.037624    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:35.050625    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:35.050639    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:35.062387    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:35.062400    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:35.073755    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:35.073769    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:35.085491    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:35.085504    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:37.551412    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:37.552025    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:37.593074    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:37.593237    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:37.614114    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:37.614221    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:37.629062    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:37.629147    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:37.641917    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:37.641995    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:37.652957    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:37.653028    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:37.663969    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:37.664056    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:37.674454    3992 logs.go:276] 0 containers: []
	W0918 13:24:37.674466    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:37.674537    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:37.684481    3992 logs.go:276] 0 containers: []
	W0918 13:24:37.684494    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:37.684501    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:37.684506    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:37.696574    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:37.696583    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:37.709322    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:37.709332    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:37.714073    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:37.714080    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:37.748514    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:37.748525    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:37.763241    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:37.763252    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:37.778613    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:37.778627    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:37.796293    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:37.796303    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:37.813805    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:37.813815    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:37.839044    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:37.839053    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:37.850840    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:37.850854    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:37.868737    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:37.868747    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:37.894798    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:37.894806    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:37.906177    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:37.906193    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:37.944874    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:37.944883    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:40.464802    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:37.625474    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:45.467054    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:45.467255    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:45.485895    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:45.485997    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:45.507765    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:45.507844    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:45.518856    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:45.518928    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:45.529334    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:45.529405    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:45.539359    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:45.539439    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:45.549664    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:45.549743    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:45.559582    3992 logs.go:276] 0 containers: []
	W0918 13:24:45.559596    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:45.559659    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:45.569889    3992 logs.go:276] 0 containers: []
	W0918 13:24:45.569903    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:45.569910    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:45.569915    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:45.582200    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:45.582211    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:45.606657    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:45.606672    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:45.631579    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:45.631594    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:45.665426    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:45.665435    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:45.677406    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:45.677415    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:45.700815    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:45.700831    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:45.739217    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:45.739227    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:45.743301    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:45.743310    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:45.768164    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:45.768173    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:45.782810    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:45.782826    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:45.799034    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:45.799045    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:45.811214    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:45.811225    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:45.824849    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:45.824859    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:45.838353    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:45.838366    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:42.627639    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:42.627964    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:42.660591    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:42.660742    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:42.677935    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:42.678027    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:42.690907    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:42.690999    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:42.702672    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:42.702756    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:42.713414    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:42.713491    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:42.724052    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:42.724159    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:42.738485    3941 logs.go:276] 0 containers: []
	W0918 13:24:42.738495    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:42.738557    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:42.749731    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:42.749750    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:42.749755    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:42.762248    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:42.762259    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:42.773603    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:42.773613    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:42.797919    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:42.797929    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:42.812595    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:42.812611    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:42.826860    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:42.826874    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:42.838281    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:42.838292    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:42.853292    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:42.853310    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:42.864742    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:42.864756    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:42.876703    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:42.876715    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:42.911112    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:42.911124    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:42.924180    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:42.924190    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:42.942488    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:42.942502    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:42.956792    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:42.956805    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:42.961614    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:42.961622    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:42.979249    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:42.979262    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:42.991111    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:42.991121    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:45.529275    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:48.351734    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:50.531469    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:50.532039    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:50.572474    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:50.572641    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:50.595323    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:50.595452    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:50.610799    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:50.610887    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:50.623459    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:50.623552    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:50.634352    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:50.634433    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:50.646895    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:50.646979    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:50.657383    3941 logs.go:276] 0 containers: []
	W0918 13:24:50.657393    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:50.657463    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:50.672159    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:50.672177    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:50.672182    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:50.687208    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:50.687221    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:50.699228    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:50.699238    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:50.710661    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:50.710673    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:50.722805    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:50.722818    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:50.759169    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:50.759183    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:50.794138    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:50.794152    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:50.808310    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:50.808322    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:50.820936    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:50.820946    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:50.847242    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:50.847256    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:50.851730    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:50.851739    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:50.871408    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:50.871419    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:24:50.882589    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:50.882602    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:50.894699    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:50.894709    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:50.912036    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:50.912046    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:50.924149    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:50.924159    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:50.935247    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:50.935258    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:53.353959    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:53.354165    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:53.370107    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:53.370216    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:53.382086    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:53.382172    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:53.392398    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:53.392488    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:53.402781    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:53.402864    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:53.414559    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:53.414633    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:53.425421    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:53.425504    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:53.435772    3992 logs.go:276] 0 containers: []
	W0918 13:24:53.435783    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:53.435854    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:53.445162    3992 logs.go:276] 0 containers: []
	W0918 13:24:53.445174    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:53.445181    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:53.445189    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:53.483862    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:53.483871    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:53.517968    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:53.517979    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:53.532149    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:53.532160    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:53.549984    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:53.549994    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:53.561818    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:53.561828    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:53.573211    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:53.573221    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:53.584841    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:53.584852    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:53.596818    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:53.596827    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:53.609030    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:53.609041    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:53.622125    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:53.622134    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:53.645164    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:53.645171    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:53.649688    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:53.649694    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:53.663654    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:53.663667    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:53.687821    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:53.687834    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:56.211746    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:53.458554    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:01.214027    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:01.214285    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:01.231743    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:01.231844    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:01.245748    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:01.245838    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:01.257007    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:01.257084    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:01.267539    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:01.267624    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:01.278784    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:01.278865    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:01.289480    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:01.289555    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:01.300118    3992 logs.go:276] 0 containers: []
	W0918 13:25:01.300130    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:01.300197    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:01.310083    3992 logs.go:276] 0 containers: []
	W0918 13:25:01.310100    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:01.310108    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:01.310115    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:01.323635    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:01.323646    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:01.338279    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:01.338289    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:01.350078    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:01.350088    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:01.363192    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:01.363202    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:01.388395    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:01.388404    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:58.460612    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:58.460777    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:58.472293    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:24:58.472387    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:58.485738    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:24:58.485815    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:58.500229    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:24:58.500305    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:58.510475    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:24:58.510567    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:58.520900    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:24:58.520992    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:58.531037    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:24:58.531122    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:58.541122    3941 logs.go:276] 0 containers: []
	W0918 13:24:58.541139    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:58.541215    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:58.551638    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:24:58.551655    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:58.551660    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:58.588016    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:24:58.588028    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:24:58.602201    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:24:58.602209    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:24:58.617806    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:24:58.617820    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:24:58.629613    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:58.629623    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:58.653456    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:58.653464    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:58.690772    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:24:58.690780    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:24:58.702832    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:24:58.702845    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:24:58.721983    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:24:58.721999    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:24:58.733425    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:24:58.733439    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:24:58.746456    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:24:58.746467    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:24:58.763135    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:24:58.763145    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:24:58.775216    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:24:58.775231    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:24:58.786538    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:24:58.786551    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:58.798683    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:58.798698    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:58.802870    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:24:58.802876    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:24:58.816539    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:24:58.816550    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:01.329873    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:01.422154    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:01.422164    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:01.435663    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:01.435672    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:01.446603    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:01.446615    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:01.458272    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:01.458283    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:01.496919    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:01.496939    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:01.514368    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:01.514378    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:01.525705    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:01.525720    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:01.529965    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:01.529975    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:01.554393    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:01.554402    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:04.068372    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:06.331924    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:06.332113    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:06.350583    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:06.350693    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:06.364698    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:06.364818    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:06.377038    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:06.377129    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:06.388059    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:06.388141    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:06.398503    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:06.398585    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:06.408880    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:06.408950    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:06.419662    3941 logs.go:276] 0 containers: []
	W0918 13:25:06.419674    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:06.419731    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:06.430447    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:06.430465    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:06.430470    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:06.435020    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:06.435027    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:06.449449    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:06.449459    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:06.461814    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:06.461829    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:06.497688    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:06.497699    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:06.513739    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:06.513752    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:06.529863    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:06.529873    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:06.548426    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:06.548436    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:06.566083    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:06.566092    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:06.577628    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:06.577641    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:06.602474    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:06.602488    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:06.640303    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:06.640313    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:06.653430    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:06.653441    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:06.665719    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:06.665730    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:06.677829    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:06.677842    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:06.689606    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:06.689622    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:06.701589    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:06.701599    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:09.070506    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:09.071266    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:09.102026    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:09.102182    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:09.131929    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:09.132022    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:09.144481    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:09.144561    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:09.156003    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:09.156100    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:09.166476    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:09.166572    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:09.178854    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:09.178946    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:09.189667    3992 logs.go:276] 0 containers: []
	W0918 13:25:09.189681    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:09.189750    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:09.199778    3992 logs.go:276] 0 containers: []
	W0918 13:25:09.199788    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:09.199795    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:09.199801    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:09.241290    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:09.241302    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:09.252818    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:09.252831    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:09.273589    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:09.273600    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:09.309892    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:09.309901    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:09.323633    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:09.323643    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:09.339406    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:09.339417    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:09.370291    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:09.370306    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:09.374639    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:09.374647    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:09.388600    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:09.388611    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:09.412405    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:09.412419    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:09.427127    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:09.427137    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:09.439455    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:09.439467    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:09.451983    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:09.451995    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:09.476741    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:09.476749    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
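
The two interleaved PIDs above (3941 and 3992) are each driving the same loop: probe the apiserver's /healthz endpoint, and when the probe deadlines out after roughly five seconds, fall back to enumerating and dumping component logs. Below is a minimal Go sketch of that probe pattern; the 5-second timeout is inferred only from the gap between each "Checking apiserver healthz" and "stopped" line, and the InsecureSkipVerify setting is an assumption made for a self-signed apiserver certificate, neither being confirmed by this log.

    // Illustrative sketch only, not minikube's actual api_server.go code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            // Assumption: ~5s, matching the observed check-to-timeout gap.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: apiserver serves a self-signed certificate.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // Mirrors the logged "stopped: <url>: <err>" shape.
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unhealthy: %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(err)
        }
    }
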
	I0918 13:25:09.215317    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:11.990983    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:14.217359    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:14.217470    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:14.229469    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:14.229560    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:14.241016    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:14.241098    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:14.251785    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:14.251864    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:14.262992    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:14.263071    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:14.279117    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:14.279200    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:14.290137    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:14.290212    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:14.307484    3941 logs.go:276] 0 containers: []
	W0918 13:25:14.307499    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:14.307579    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:14.319367    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:14.319392    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:14.319397    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:14.332062    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:14.332077    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:14.347038    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:14.347049    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:14.358873    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:14.358887    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:14.371235    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:14.371246    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:14.387120    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:14.387133    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:14.412012    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:14.412030    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:14.449705    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:14.449719    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:14.454377    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:14.454389    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:14.492790    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:14.492804    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:14.507156    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:14.507169    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:14.518490    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:14.518502    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:14.532501    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:14.532513    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:14.548405    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:14.548417    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:14.560214    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:14.560224    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:14.577327    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:14.577343    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:14.590047    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:14.590059    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:17.102219    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:16.993589    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:16.994092    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:17.037128    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:17.037285    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:17.054726    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:17.054843    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:17.068236    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:17.068331    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:17.079853    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:17.079943    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:17.090792    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:17.090868    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:17.102190    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:17.102277    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:17.112654    3992 logs.go:276] 0 containers: []
	W0918 13:25:17.112665    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:17.112741    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:17.123164    3992 logs.go:276] 0 containers: []
	W0918 13:25:17.123176    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:17.123184    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:17.123189    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:17.137506    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:17.137517    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:17.152974    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:17.152984    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:17.178132    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:17.178140    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:17.216042    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:17.216049    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:17.233712    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:17.233725    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:17.251568    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:17.251578    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:17.265580    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:17.265589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:17.277588    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:17.277598    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:17.290867    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:17.290881    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:17.302106    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:17.302118    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:17.347629    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:17.347647    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:17.373204    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:17.373215    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:17.384963    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:17.384974    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:17.396721    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:17.396731    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
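
Each fallback cycle starts by resolving every control-plane component to container IDs with the exact docker ps -a --filter=name=k8s_<component> --format={{.ID}} command logged above; components with zero matches (kindnet throughout, and storage-provisioner for PID 3992) are recorded as warnings and skipped. A self-contained sketch of that enumeration step, illustrative only and not the actual logs.go implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same docker ps filter seen in the log and
    // returns the matching container IDs, one per output line.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(err)
                continue
            }
            // Mirrors the logged "N containers: [...]" shape.
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
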
	I0918 13:25:19.903172    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:22.104348    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:22.104733    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:22.134110    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:22.134268    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:22.152142    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:22.152255    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:22.166090    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:22.166180    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:24.904452    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:24.904683    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:24.921378    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:24.921466    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:24.933369    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:24.933483    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:24.943762    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:24.943847    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:24.955024    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:24.955099    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:24.965706    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:24.965789    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:24.978011    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:24.978097    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:24.988163    3992 logs.go:276] 0 containers: []
	W0918 13:25:24.988174    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:24.988239    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:24.999946    3992 logs.go:276] 0 containers: []
	W0918 13:25:24.999962    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:24.999972    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:24.999977    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:25.005041    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:25.005048    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:25.016661    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:25.016678    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:25.032247    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:25.032257    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:25.047287    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:25.047297    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:25.071349    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:25.071357    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:25.082715    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:25.082725    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:25.121467    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:25.121477    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:25.156203    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:25.156216    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:25.174720    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:25.174731    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:25.190760    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:25.190773    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:25.208348    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:25.208359    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:25.221679    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:25.221690    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:25.253051    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:25.253063    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:25.267298    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:25.267308    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:22.177712    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:22.177806    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:22.188206    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:22.188295    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:22.206303    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:22.206386    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:22.216566    3941 logs.go:276] 0 containers: []
	W0918 13:25:22.216582    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:22.216653    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:22.227351    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:22.227369    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:22.227374    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:22.240127    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:22.240154    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:22.256553    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:22.256562    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:22.268602    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:22.268613    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:22.282258    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:22.282271    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:22.319873    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:22.319884    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:22.333871    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:22.333882    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:22.359208    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:22.359219    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:22.365883    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:22.365898    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:22.378308    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:22.378323    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:22.398570    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:22.398586    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:22.413525    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:22.413540    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:22.431190    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:22.431200    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:22.468919    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:22.468929    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:22.487622    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:22.487632    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:22.499065    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:22.499076    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:22.512773    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:22.512790    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:25.032326    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:27.784460    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:30.034439    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:30.034826    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:30.063433    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:30.063588    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:30.086485    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:30.086581    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:30.099656    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:30.099747    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:30.110954    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:30.111036    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:30.126135    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:30.126224    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:30.136904    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:30.136975    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:30.150195    3941 logs.go:276] 0 containers: []
	W0918 13:25:30.150208    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:30.150281    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:30.161279    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:30.161300    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:30.161305    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:30.178091    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:30.178101    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:30.192813    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:30.192822    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:30.197353    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:30.197361    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:30.212237    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:30.212250    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:30.224374    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:30.224385    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:30.240842    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:30.240852    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:30.275448    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:30.275465    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:30.291359    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:30.291371    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:30.328584    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:30.328593    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:30.340596    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:30.340607    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:30.352614    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:30.352627    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:30.364946    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:30.364957    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:30.383208    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:30.383222    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:30.394472    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:30.394485    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:30.405775    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:30.405790    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:30.430775    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:30.430783    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:32.786709    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:32.787042    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:32.811702    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:32.811847    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:32.831704    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:32.831819    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:32.844201    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:32.844287    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:32.855725    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:32.855812    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:32.865956    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:32.866034    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:32.876640    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:32.876729    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:32.887982    3992 logs.go:276] 0 containers: []
	W0918 13:25:32.887997    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:32.888071    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:32.898596    3992 logs.go:276] 0 containers: []
	W0918 13:25:32.898609    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:32.898616    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:32.898621    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:32.911970    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:32.911980    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:32.947411    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:32.947420    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:32.961813    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:32.961824    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:32.979060    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:32.979070    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:33.017445    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:33.017455    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:33.021752    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:33.021757    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:33.040580    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:33.040589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:33.052243    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:33.052255    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:33.066336    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:33.066345    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:33.090494    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:33.090505    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:33.107188    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:33.107197    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:33.119854    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:33.119868    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:33.130804    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:33.130816    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:33.142910    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:33.142922    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:35.668394    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:32.945558    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:40.671000    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:40.671499    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:40.711977    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:40.712142    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:40.730823    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:40.730931    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:40.744319    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:40.744417    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:40.758200    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:40.758291    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:40.770025    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:40.770113    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:40.780849    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:40.780941    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:40.790814    3992 logs.go:276] 0 containers: []
	W0918 13:25:40.790830    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:40.790904    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:40.801623    3992 logs.go:276] 0 containers: []
	W0918 13:25:40.801635    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:40.801645    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:40.801651    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:40.838412    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:40.838420    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:40.849966    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:40.849975    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:40.868442    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:40.868453    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:40.881327    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:40.881337    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:40.918617    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:40.918632    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:40.931649    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:40.931667    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:40.945208    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:40.945218    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:40.969760    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:40.969770    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:40.995576    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:40.995587    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:41.009502    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:41.009513    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:41.013980    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:41.013987    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:41.030530    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:41.030544    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:41.044753    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:41.044764    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:41.057487    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:41.057499    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:37.947669    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:37.947998    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:37.974137    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:37.974297    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:37.991421    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:37.991525    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:38.004992    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:38.005082    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:38.016641    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:38.016727    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:38.030629    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:38.030718    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:38.041316    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:38.041396    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:38.051727    3941 logs.go:276] 0 containers: []
	W0918 13:25:38.051743    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:38.051819    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:38.062305    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:38.062324    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:38.062329    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:38.085572    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:38.085581    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:38.122132    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:38.122142    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:38.162338    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:38.162354    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:38.175621    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:38.175638    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:38.190525    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:38.190536    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:38.204540    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:38.204552    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:38.216171    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:38.216182    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:38.221043    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:38.221051    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:38.235411    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:38.235422    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:38.247469    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:38.247480    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:38.261771    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:38.261783    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:38.279752    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:38.279762    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:38.291468    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:38.291477    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:38.309320    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:38.309329    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:38.320696    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:38.320707    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:38.332744    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:38.332760    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:40.846931    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:43.571426    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:45.848971    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:45.849105    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:45.860644    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:45.860740    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:45.871921    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:45.872003    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:45.882438    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:45.882523    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:45.892864    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:45.892945    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:45.903416    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:45.903493    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:45.914373    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:45.914459    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:45.924997    3941 logs.go:276] 0 containers: []
	W0918 13:25:45.925016    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:45.925099    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:45.935690    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:45.935711    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:45.935717    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:45.940784    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:45.940791    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:45.976134    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:45.976150    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:45.990855    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:45.990866    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:46.005458    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:46.005469    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:46.020052    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:46.020062    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:46.031744    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:46.031756    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:46.043886    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:46.043897    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:46.056126    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:46.056139    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:46.067601    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:46.067610    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:46.106627    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:46.106637    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:46.123702    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:46.123713    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:46.135739    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:46.135751    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:46.158928    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:46.158937    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:46.174079    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:46.174090    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:46.189020    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:46.189031    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:46.200948    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:46.200959    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
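
Once the IDs are known, gathering is a straight shell-out per source: docker logs --tail 400 <id> for each container, journalctl for the kubelet and docker/cri-docker units, dmesg, kubectl describe nodes, and a crictl/docker ps container-status listing. A hypothetical, reduced sketch of that dispatch follows (the dmesg invocation is simplified relative to the logged flags, and the container ID shown is one taken from the log above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather shells out through bash, as the ssh_runner lines above do,
    // and prints whatever the command produced.
    func gather(name, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", name, err)
        }
        fmt.Print(string(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg | tail -n 400") // simplified flags
        gather("kube-apiserver [de4406659d78]",
            "docker logs --tail 400 de4406659d78")
    }
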
	I0918 13:25:48.573791    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:48.574080    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:48.598060    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:48.598184    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:48.614896    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:48.614992    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:48.627493    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:48.627568    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:48.640583    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:48.640658    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:48.651508    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:48.651574    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:48.662030    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:48.662112    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:48.672253    3992 logs.go:276] 0 containers: []
	W0918 13:25:48.672266    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:48.672328    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:48.683017    3992 logs.go:276] 0 containers: []
	W0918 13:25:48.683027    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:48.683036    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:48.683046    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:48.695001    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:48.695009    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:48.709806    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:48.709818    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:48.747962    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:48.747974    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:48.762325    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:48.762338    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:48.788165    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:48.788179    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:48.802700    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:48.802716    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:48.821394    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:48.821405    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:48.845368    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:48.845380    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:48.857576    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:48.857587    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:48.862162    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:48.862169    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:48.874779    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:48.874792    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:48.909936    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:48.909950    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:48.929086    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:48.929096    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:48.940599    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:48.940611    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
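
Each gathering cycle above starts the same way: one docker ps -a per control-plane component, filtered by the k8s_<name> container-name prefix and formatted down to bare IDs. A sketch of that discovery step, assuming a local docker CLI (the component list is copied from the log; the helper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "lookup failed:", err)
				continue
			}
			// Two IDs per component are normal in this report: the live
			// container plus the exited one from the prior start attempt.
			fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
		}
	}
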
	I0918 13:25:48.714705    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:51.455386    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:53.716744    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:53.716869    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:53.728606    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:25:53.728698    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:53.739501    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:25:53.739592    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:53.751894    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:25:53.751967    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:53.770522    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:25:53.770596    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:53.781343    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:25:53.781413    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:53.793893    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:25:53.793984    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:53.804864    3941 logs.go:276] 0 containers: []
	W0918 13:25:53.804876    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:53.804942    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:53.819159    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:25:53.819179    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:53.819185    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:53.853731    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:25:53.853746    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:25:53.869454    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:25:53.869465    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:25:53.883818    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:25:53.883833    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:25:53.896417    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:25:53.896428    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:25:53.909853    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:25:53.909863    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:25:53.921267    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:25:53.921282    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:53.933782    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:25:53.933796    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:25:53.946043    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:53.946052    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:53.969301    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:53.969311    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:53.973442    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:25:53.973449    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:25:53.994725    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:25:53.994739    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:25:54.006012    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:25:54.006023    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:25:54.020874    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:54.020888    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:54.056685    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:25:54.056693    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:25:54.067689    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:25:54.067699    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:25:54.084724    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:25:54.084737    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:25:56.605537    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:56.457547    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:56.457737    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:56.472729    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:56.472832    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:56.484519    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:56.484593    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:56.495637    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:56.495722    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:56.506181    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:56.506269    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:56.516819    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:56.516898    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:56.527563    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:56.527639    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:56.537619    3992 logs.go:276] 0 containers: []
	W0918 13:25:56.537631    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:56.537699    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:56.548391    3992 logs.go:276] 0 containers: []
	W0918 13:25:56.548401    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:56.548410    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:56.548416    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:56.553105    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:56.553115    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:56.570354    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:56.570365    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:56.586337    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:56.586346    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:56.600079    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:56.600089    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:56.612287    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:56.612297    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:56.623920    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:56.623934    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:56.636291    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:56.636302    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:56.661511    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:56.661522    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:56.699326    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:56.699337    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:56.729088    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:56.729103    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:56.743327    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:56.743337    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:56.756019    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:56.756032    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:56.794591    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:56.794599    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:56.811644    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:56.811655    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:59.325118    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
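
The Checking/stopped pairs throughout this section are a plain HTTPS probe of the apiserver's /healthz endpoint with a client-side timeout; when the guest's apiserver never answers, Go's net/http reports exactly the "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error seen above. A minimal sketch, assuming a 5-second timeout (inferred from the gaps between probes, not stated in the log) and skipping certificate verification for brevity; minikube reaches 10.0.2.15 through the VM's network, so run elsewhere this only reproduces the failure shape:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed; a fired timeout yields the error quoted above
			Transport: &http.Transport{
				// The guest apiserver's cert is not trusted by an ad-hoc probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // same shape as the log's failure lines
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
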
	I0918 13:26:01.606437    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:01.606798    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:01.633954    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:01.634108    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:01.651790    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:01.651888    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:01.671072    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:01.671161    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:01.681974    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:01.682061    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:01.695663    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:01.695747    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:01.707733    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:01.707818    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:01.719215    3941 logs.go:276] 0 containers: []
	W0918 13:26:01.719227    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:01.719303    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:01.729904    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:01.729924    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:01.729930    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:01.742845    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:01.742858    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:01.755819    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:01.755830    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:01.771903    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:01.771914    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:01.784132    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:01.784146    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:01.795597    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:01.795607    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:01.831264    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:01.831280    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:01.846273    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:01.846286    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:01.862088    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:01.862099    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:01.873670    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:01.873681    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:01.899424    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:01.899435    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:01.904178    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:01.904186    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:01.916207    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:01.916218    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:01.931193    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:01.931203    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:01.949237    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:01.949249    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:01.987348    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:01.987357    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:02.008886    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:02.008896    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
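
Every component is then tailed container by container with docker logs --tail 400; both IDs of a pair get their own pass, since the exited container from the previous start attempt often holds the interesting error. A sketch of that per-ID tail, using the kube-apiserver pair from the cycle above (the loop around it is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func tail(id string) {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("tail", id, "failed:", err)
			return
		}
		fmt.Printf("== %s ==\n%s", id, out)
	}

	func main() {
		// IDs copied from pid 3941's kube-apiserver pair above.
		for _, id := range []string{"de4406659d78", "6b1ac7de9044"} {
			tail(id)
		}
	}
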
	I0918 13:26:04.327388    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:04.327646    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:04.352297    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:04.352426    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:04.368476    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:04.368567    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:04.381081    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:04.381169    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:04.392592    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:04.392675    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:04.407498    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:04.407582    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:04.422000    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:04.422088    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:04.431879    3992 logs.go:276] 0 containers: []
	W0918 13:26:04.431890    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:04.431954    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:04.442233    3992 logs.go:276] 0 containers: []
	W0918 13:26:04.442243    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:04.442251    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:04.442256    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:04.455225    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:04.455240    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:04.479066    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:04.479074    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:04.504227    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:04.504239    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:04.508933    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:04.508943    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:04.543134    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:04.543145    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:04.559398    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:04.559411    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:04.573931    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:04.573949    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:04.591762    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:04.591774    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:04.630522    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:04.630531    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:04.641709    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:04.641720    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:04.656945    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:04.656959    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:04.674257    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:04.674268    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:04.687227    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:04.687238    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:04.701788    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:04.701800    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:04.522655    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:07.215131    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:09.524882    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:09.525391    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:09.569529    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:09.569701    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:09.594985    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:09.595116    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:09.609818    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:09.609911    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:09.622097    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:09.622180    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:09.633706    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:09.633791    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:09.644779    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:09.644853    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:09.657016    3941 logs.go:276] 0 containers: []
	W0918 13:26:09.657027    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:09.657097    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:09.668251    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:09.668271    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:09.668277    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:09.703687    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:09.703700    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:09.724878    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:09.724888    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:09.736214    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:09.736226    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:09.758038    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:09.758049    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:09.776708    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:09.776720    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:09.816956    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:09.816968    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:09.830404    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:09.830414    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:09.841519    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:09.841533    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:09.853611    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:09.853625    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:09.857773    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:09.857781    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:09.872284    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:09.872295    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:09.884322    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:09.884332    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:09.908310    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:09.908318    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:09.931826    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:09.931837    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:09.943228    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:09.943239    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:09.955684    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:09.955694    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
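
The "describe nodes" gather does not use the host's kubectl: it runs the version-pinned binary minikube placed inside the guest, against the guest's own kubeconfig. A sketch with the exact paths from the log (the wrapper around them is illustrative); with the apiserver unreachable, as the healthz failures above show, this call is expected to error rather than print node details:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.24.1/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
		if err != nil {
			fmt.Println("describe nodes failed:", err)
		}
		fmt.Print(string(out))
	}
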
	I0918 13:26:12.217377    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:12.217655    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:12.239438    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:12.239579    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:12.254251    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:12.254343    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:12.267274    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:12.267363    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:12.278236    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:12.278327    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:12.289518    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:12.289602    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:12.300759    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:12.300842    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:12.311226    3992 logs.go:276] 0 containers: []
	W0918 13:26:12.311237    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:12.311310    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:12.321292    3992 logs.go:276] 0 containers: []
	W0918 13:26:12.321306    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:12.321315    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:12.321321    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:12.333702    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:12.333715    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:12.345938    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:12.345953    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:12.357505    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:12.357517    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:12.361964    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:12.361970    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:12.377289    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:12.377300    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:12.391648    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:12.391661    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:12.404525    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:12.404540    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:12.418203    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:12.418217    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:12.456924    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:12.456935    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:12.491155    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:12.491170    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:12.509737    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:12.509749    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:12.527096    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:12.527106    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:12.550571    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:12.550579    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:12.575556    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:12.575566    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:15.089531    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:12.472679    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:20.091835    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:20.092001    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:20.114389    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:20.114497    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:20.127218    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:20.127307    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:20.138326    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:20.138399    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:20.148832    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:20.148921    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:20.166779    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:20.166858    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:20.177299    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:20.177372    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:20.187341    3992 logs.go:276] 0 containers: []
	W0918 13:26:20.187353    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:20.187427    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:20.197591    3992 logs.go:276] 0 containers: []
	W0918 13:26:20.197602    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:20.197608    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:20.197614    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:20.202261    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:20.202271    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:20.216864    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:20.216874    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:20.240525    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:20.240533    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:20.279545    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:20.279556    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:20.313820    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:20.313831    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:20.331082    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:20.331094    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:20.344282    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:20.344294    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:20.358386    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:20.358396    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:20.384327    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:20.384336    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:20.398616    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:20.398626    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:20.410348    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:20.410358    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:20.425647    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:20.425656    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:20.438735    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:20.438745    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:20.450170    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:20.450182    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:17.474839    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:17.475501    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:17.518747    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:17.518917    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:17.539477    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:17.539601    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:17.554436    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:17.554528    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:17.567259    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:17.567351    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:17.578603    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:17.578681    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:17.589463    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:17.589552    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:17.600117    3941 logs.go:276] 0 containers: []
	W0918 13:26:17.600131    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:17.600214    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:17.613395    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:17.613439    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:17.613449    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:17.649522    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:17.649534    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:17.665868    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:17.665881    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:17.677112    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:17.677126    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:17.688336    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:17.688346    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:17.701602    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:17.701614    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:17.715602    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:17.715616    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:17.727257    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:17.727272    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:17.739291    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:17.739305    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:17.755965    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:17.755979    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:17.773460    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:17.773475    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:17.784713    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:17.784728    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:17.819352    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:17.819368    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:17.847410    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:17.847425    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:17.870217    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:17.870226    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:17.908325    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:17.908334    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:17.912477    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:17.912483    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
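
Besides container logs, each cycle pulls three host-side sources with the flags shown above: journalctl for the kubelet unit, journalctl for the docker and cri-docker units, and dmesg trimmed to warnings and worse (-H human-readable, -P no pager, -L=never no color, --level filtering to warn and above). A sketch that runs the same command lines, assuming a local /bin/bash:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet": "sudo journalctl -u kubelet -n 400",
			"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
			"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for name, cmdline := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("== %s ==\n%s", name, out)
		}
	}
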
	I0918 13:26:20.424636    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:22.964162    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:25.426499    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:25.426772    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:25.448000    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:25.448141    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:25.462404    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:25.462500    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:25.474825    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:25.474909    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:25.485712    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:25.485795    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:25.497623    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:25.497706    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:25.507874    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:25.507957    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:25.518245    3941 logs.go:276] 0 containers: []
	W0918 13:26:25.518256    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:25.518332    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:25.528845    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:25.528865    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:25.528873    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:25.533117    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:25.533125    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:25.567433    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:25.567444    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:25.581599    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:25.581615    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:25.593511    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:25.593522    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:25.605307    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:25.605318    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:25.616644    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:25.616655    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:25.629776    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:25.629786    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:25.643987    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:25.643997    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:25.656436    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:25.656447    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:25.668801    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:25.668813    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:25.680270    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:25.680281    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:25.694174    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:25.694184    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:25.708933    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:25.708945    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:25.726966    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:25.726977    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:25.738646    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:25.738658    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:25.775855    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:25.775864    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:27.966560    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:27.966707    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:27.983795    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:27.983894    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:27.998780    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:27.998867    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:28.010903    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:28.010975    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:28.021317    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:28.021400    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:28.033449    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:28.033529    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:28.044210    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:28.044291    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:28.054954    3992 logs.go:276] 0 containers: []
	W0918 13:26:28.054966    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:28.055036    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:28.065433    3992 logs.go:276] 0 containers: []
	W0918 13:26:28.065448    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:28.065456    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:28.065463    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:28.104695    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:28.104705    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:28.109370    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:28.109379    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:28.123429    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:28.123443    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:28.135038    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:28.135050    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:28.148102    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:28.148115    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:28.162671    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:28.162682    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:28.196763    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:28.196774    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:28.211683    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:28.211693    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:28.223061    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:28.223072    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:28.234529    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:28.234538    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:28.260352    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:28.260368    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:28.272525    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:28.272538    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:28.290199    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:28.290213    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:28.314438    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:28.314446    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:30.831077    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:28.299436    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:35.833396    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:35.833890    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:35.867851    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:35.868009    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:35.886838    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:35.886944    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:35.900248    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:35.900353    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:35.911241    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:35.911329    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:35.921780    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:35.921857    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:35.933127    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:35.933209    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:35.943778    3992 logs.go:276] 0 containers: []
	W0918 13:26:35.943793    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:35.943861    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:35.956687    3992 logs.go:276] 0 containers: []
	W0918 13:26:35.956700    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:35.956708    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:35.956714    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:35.970756    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:35.970767    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:35.995818    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:35.995827    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:36.012043    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:36.012057    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:36.024620    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:36.024631    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:36.047705    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:36.047716    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:36.084150    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:36.084161    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:36.095237    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:36.095249    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:36.108745    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:36.108754    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:36.131481    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:36.131489    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:36.149367    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:36.149379    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:36.163875    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:36.163889    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:36.175305    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:36.175320    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:36.179328    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:36.179333    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:36.191863    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:36.191872    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
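Each "Gathering logs" cycle above reduces to a docker ps name filter per control-plane component followed by a fixed-depth log tail. A minimal bash sketch of the same loop, with the component list taken from the filters shown in this log:

    for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      for id in $(docker ps -a --filter "name=k8s_${comp}" --format '{{.ID}}'); do
        echo "== ${comp}/${id} =="        # label each container's output
        docker logs --tail 400 "${id}"    # same tail depth minikube uses
      done
    done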
	I0918 13:26:33.301588    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:33.302076    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:33.338259    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:33.338421    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:33.359343    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:33.359461    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:33.374010    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:33.374107    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:33.386076    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:33.386162    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:33.397219    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:33.397296    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:33.409804    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:33.409895    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:33.420056    3941 logs.go:276] 0 containers: []
	W0918 13:26:33.420070    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:33.420153    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:33.431386    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:33.431406    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:33.431411    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:33.443791    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:33.443800    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:33.448667    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:33.448673    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:33.463721    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:33.463730    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:33.476812    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:33.476821    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:33.491160    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:33.491170    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:33.506449    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:33.506461    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:33.528650    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:33.528657    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:33.564676    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:33.564687    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:33.578613    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:33.578623    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:33.591152    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:33.591164    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:33.603854    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:33.603865    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:33.618707    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:33.618717    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:33.630342    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:33.630352    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:33.641340    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:33.641351    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:33.656764    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:33.656775    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:33.692796    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:33.692807    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:36.212557    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:38.727784    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:41.213334    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:41.213623    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:41.240063    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:41.240219    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:41.256890    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:41.256987    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:41.269537    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:41.269629    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:41.285476    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:41.285564    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:41.296103    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:41.296191    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:41.306701    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:41.306789    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:41.316906    3941 logs.go:276] 0 containers: []
	W0918 13:26:41.316917    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:41.316989    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:41.327557    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:41.327573    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:41.327578    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:41.362279    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:41.362291    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:41.377678    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:41.377688    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:41.388966    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:41.388977    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:41.400283    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:41.400297    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:41.439552    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:41.439567    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:41.452010    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:41.452023    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:41.466129    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:41.466138    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:41.477292    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:41.477307    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:41.488900    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:41.488912    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:41.493417    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:41.493424    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:41.507702    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:41.507712    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:41.524643    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:41.524656    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:41.547175    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:41.547184    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:41.559444    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:41.559453    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:41.573259    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:41.573273    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:41.585483    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:41.585493    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:43.729967    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:43.730323    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:43.754726    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:43.754854    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:43.773460    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:43.773553    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:43.785865    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:43.785958    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:43.796537    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:43.796621    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:43.806856    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:43.806931    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:43.817542    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:43.817626    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:43.827911    3992 logs.go:276] 0 containers: []
	W0918 13:26:43.827925    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:43.827989    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:43.837878    3992 logs.go:276] 0 containers: []
	W0918 13:26:43.837892    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:43.837901    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:43.837907    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:43.841973    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:43.841982    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:43.856299    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:43.856309    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:43.880658    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:43.880667    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:43.919126    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:43.919134    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:43.960729    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:43.960742    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:43.974725    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:43.974736    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:43.998751    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:43.998768    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:44.021035    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:44.021049    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:44.035316    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:44.035328    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:44.047623    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:44.047635    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:44.072607    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:44.072618    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:44.090966    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:44.090978    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:44.102942    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:44.102953    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:44.119037    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:44.119048    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:44.099067    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:46.634108    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:49.101248    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:49.101577    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:49.125529    3941 logs.go:276] 2 containers: [de4406659d78 6b1ac7de9044]
	I0918 13:26:49.125677    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:49.140823    3941 logs.go:276] 2 containers: [a22b43545a9c b055a7066d86]
	I0918 13:26:49.140916    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:49.157466    3941 logs.go:276] 1 containers: [7f53a20144c2]
	I0918 13:26:49.157560    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:49.181342    3941 logs.go:276] 2 containers: [eb85514eb999 cb6295d7aef9]
	I0918 13:26:49.181435    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:49.194571    3941 logs.go:276] 1 containers: [e2913184e0a4]
	I0918 13:26:49.194647    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:49.205454    3941 logs.go:276] 2 containers: [c3364755f696 48327b73c9bd]
	I0918 13:26:49.205537    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:49.215546    3941 logs.go:276] 0 containers: []
	W0918 13:26:49.215558    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:49.215630    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:49.226381    3941 logs.go:276] 2 containers: [082eb91c293a 09c35ba6beb9]
	I0918 13:26:49.226399    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:49.226404    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:49.230821    3941 logs.go:123] Gathering logs for storage-provisioner [09c35ba6beb9] ...
	I0918 13:26:49.230827    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09c35ba6beb9"
	I0918 13:26:49.241725    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:26:49.241737    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:49.253598    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:49.253611    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:49.289662    3941 logs.go:123] Gathering logs for kube-apiserver [6b1ac7de9044] ...
	I0918 13:26:49.289670    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b1ac7de9044"
	I0918 13:26:49.302049    3941 logs.go:123] Gathering logs for etcd [a22b43545a9c] ...
	I0918 13:26:49.302062    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a22b43545a9c"
	I0918 13:26:49.316524    3941 logs.go:123] Gathering logs for kube-controller-manager [c3364755f696] ...
	I0918 13:26:49.316539    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3364755f696"
	I0918 13:26:49.333432    3941 logs.go:123] Gathering logs for storage-provisioner [082eb91c293a] ...
	I0918 13:26:49.333445    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 082eb91c293a"
	I0918 13:26:49.345114    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:49.345129    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:49.368574    3941 logs.go:123] Gathering logs for kube-apiserver [de4406659d78] ...
	I0918 13:26:49.368581    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4406659d78"
	I0918 13:26:49.386698    3941 logs.go:123] Gathering logs for etcd [b055a7066d86] ...
	I0918 13:26:49.386709    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b055a7066d86"
	I0918 13:26:49.401648    3941 logs.go:123] Gathering logs for kube-proxy [e2913184e0a4] ...
	I0918 13:26:49.401660    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2913184e0a4"
	I0918 13:26:49.413111    3941 logs.go:123] Gathering logs for kube-controller-manager [48327b73c9bd] ...
	I0918 13:26:49.413123    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48327b73c9bd"
	I0918 13:26:49.424582    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:49.424595    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:49.458563    3941 logs.go:123] Gathering logs for kube-scheduler [eb85514eb999] ...
	I0918 13:26:49.458576    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb85514eb999"
	I0918 13:26:49.470210    3941 logs.go:123] Gathering logs for kube-scheduler [cb6295d7aef9] ...
	I0918 13:26:49.470221    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb6295d7aef9"
	I0918 13:26:49.485201    3941 logs.go:123] Gathering logs for coredns [7f53a20144c2] ...
	I0918 13:26:49.485212    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f53a20144c2"
	I0918 13:26:51.996768    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:51.635772    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:51.636315    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:51.675456    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:51.675624    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:51.701562    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:51.701695    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:51.716187    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:51.716286    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:51.727977    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:51.728062    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:51.738682    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:51.738760    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:51.750929    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:51.751014    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:51.761422    3992 logs.go:276] 0 containers: []
	W0918 13:26:51.761433    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:51.761506    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:51.772602    3992 logs.go:276] 0 containers: []
	W0918 13:26:51.772618    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:51.772626    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:51.772631    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:51.811625    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:51.811633    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:51.823167    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:51.823179    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:51.835437    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:51.835449    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:51.859313    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:51.859321    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:51.894014    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:51.894024    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:51.908012    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:51.908023    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:51.920528    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:51.920539    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:51.934884    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:51.934901    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:51.968426    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:51.968440    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:51.980347    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:51.980356    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:51.998389    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:51.998397    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:52.009703    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:52.009713    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:52.013810    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:52.013819    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:52.030671    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:52.030684    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:54.546627    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:56.998873    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:56.998973    3941 kubeadm.go:597] duration metric: took 4m3.754986625s to restartPrimaryControlPlane
	W0918 13:26:56.999040    3941 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 13:26:56.999073    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0918 13:26:57.977863    3941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 13:26:57.983113    3941 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:26:57.986059    3941 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:26:57.988794    3941 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 13:26:57.988801    3941 kubeadm.go:157] found existing configuration files:
	
	I0918 13:26:57.988827    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf
	I0918 13:26:57.991904    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 13:26:57.991935    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:26:57.995312    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf
	I0918 13:26:57.998024    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 13:26:57.998052    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:26:58.000929    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf
	I0918 13:26:58.003794    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 13:26:58.003816    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:26:58.007206    3941 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf
	I0918 13:26:58.009890    3941 kubeadm.go:163] "https://control-plane.minikube.internal:50252" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50252 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 13:26:58.009918    3941 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
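The four check-and-remove passes above (admin, kubelet, controller-manager, scheduler) collapse into one loop; a sketch using the same endpoint and paths as the log:

    # Remove any kubeconfig that does not reference the expected
    # control-plane endpoint, matching the per-file grep/rm sequence above.
    ep="https://control-plane.minikube.internal:50252"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done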
	I0918 13:26:58.012528    3941 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 13:26:58.030142    3941 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0918 13:26:58.030335    3941 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 13:26:58.075461    3941 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 13:26:58.075526    3941 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 13:26:58.075576    3941 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0918 13:26:58.125234    3941 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 13:26:58.129390    3941 out.go:235]   - Generating certificates and keys ...
	I0918 13:26:58.129521    3941 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 13:26:58.129668    3941 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 13:26:58.129714    3941 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 13:26:58.129769    3941 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 13:26:58.129839    3941 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 13:26:58.129878    3941 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 13:26:58.130007    3941 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 13:26:58.130098    3941 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 13:26:58.130206    3941 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 13:26:58.130303    3941 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 13:26:58.130357    3941 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 13:26:58.130446    3941 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 13:26:58.289522    3941 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 13:26:58.360452    3941 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 13:26:58.465958    3941 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 13:26:58.512575    3941 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 13:26:58.540158    3941 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 13:26:58.540539    3941 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 13:26:58.540589    3941 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 13:26:58.632066    3941 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 13:26:59.548884    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:59.548994    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:59.567234    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:59.567326    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:59.578854    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:59.578948    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:59.589566    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:59.589651    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:59.601542    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:59.601626    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:59.612922    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:59.613008    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:59.624667    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:59.624747    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:59.636301    3992 logs.go:276] 0 containers: []
	W0918 13:26:59.636357    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:59.636438    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:59.647739    3992 logs.go:276] 0 containers: []
	W0918 13:26:59.647752    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:59.647760    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:59.647766    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:59.675446    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:59.675460    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:59.687749    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:59.687760    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:59.724327    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:59.724342    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:59.736374    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:59.736388    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:59.748465    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:59.748477    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:59.760819    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:59.760834    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:59.772665    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:59.772678    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:59.812423    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:59.812436    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:59.835122    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:59.835137    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:59.849536    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:59.849549    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:59.865767    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:59.865783    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:59.889031    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:59.889043    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:59.903495    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:59.903511    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:59.927910    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:59.927924    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:58.635215    3941 out.go:235]   - Booting up control plane ...
	I0918 13:26:58.635259    3941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 13:26:58.635294    3941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 13:26:58.635329    3941 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 13:26:58.635379    3941 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 13:26:58.635463    3941 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 13:27:03.641998    3941 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.006945 seconds
	I0918 13:27:03.642266    3941 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 13:27:03.657488    3941 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 13:27:04.174112    3941 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 13:27:04.174224    3941 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-314000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 13:27:04.683307    3941 kubeadm.go:310] [bootstrap-token] Using token: 8lhv3k.f2rxbxynoqw4hg0y
	I0918 13:27:04.689452    3941 out.go:235]   - Configuring RBAC rules ...
	I0918 13:27:04.689506    3941 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 13:27:04.689558    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 13:27:04.696078    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 13:27:04.696941    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 13:27:04.697863    3941 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 13:27:04.698624    3941 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 13:27:04.702058    3941 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 13:27:04.885389    3941 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 13:27:05.087245    3941 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 13:27:05.087557    3941 kubeadm.go:310] 
	I0918 13:27:05.087631    3941 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 13:27:05.087638    3941 kubeadm.go:310] 
	I0918 13:27:05.087756    3941 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 13:27:05.087791    3941 kubeadm.go:310] 
	I0918 13:27:05.087842    3941 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 13:27:05.087877    3941 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 13:27:05.087916    3941 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 13:27:05.087922    3941 kubeadm.go:310] 
	I0918 13:27:05.087947    3941 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 13:27:05.087949    3941 kubeadm.go:310] 
	I0918 13:27:05.087970    3941 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 13:27:05.087972    3941 kubeadm.go:310] 
	I0918 13:27:05.087994    3941 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 13:27:05.088032    3941 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 13:27:05.088071    3941 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 13:27:05.088075    3941 kubeadm.go:310] 
	I0918 13:27:05.088119    3941 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 13:27:05.088158    3941 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 13:27:05.088160    3941 kubeadm.go:310] 
	I0918 13:27:05.088195    3941 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8lhv3k.f2rxbxynoqw4hg0y \
	I0918 13:27:05.088240    3941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 \
	I0918 13:27:05.088254    3941 kubeadm.go:310] 	--control-plane 
	I0918 13:27:05.088256    3941 kubeadm.go:310] 
	I0918 13:27:05.088295    3941 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 13:27:05.088301    3941 kubeadm.go:310] 
	I0918 13:27:05.088342    3941 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8lhv3k.f2rxbxynoqw4hg0y \
	I0918 13:27:05.088403    3941 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 
	I0918 13:27:05.088462    3941 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
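Per the preflight warning just above, the kubelet unit is not enabled on the node. The suggested follow-up, with a quick verification (a sketch, not part of the test run):

    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet.service   # expect: enabled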
	I0918 13:27:05.088470    3941 cni.go:84] Creating CNI manager for ""
	I0918 13:27:05.088478    3941 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:27:05.093200    3941 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 13:27:05.101148    3941 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 13:27:05.104223    3941 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 13:27:05.109374    3941 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 13:27:05.109434    3941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 13:27:05.109454    3941 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-314000 minikube.k8s.io/updated_at=2024_09_18T13_27_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=running-upgrade-314000 minikube.k8s.io/primary=true
	I0918 13:27:05.113528    3941 ops.go:34] apiserver oom_adj: -16
	I0918 13:27:05.156078    3941 kubeadm.go:1113] duration metric: took 46.696333ms to wait for elevateKubeSystemPrivileges
	I0918 13:27:05.156168    3941 kubeadm.go:394] duration metric: took 4m11.926200792s to StartCluster
	I0918 13:27:05.156182    3941 settings.go:142] acquiring lock: {Name:mkbb043d0459391a7d922bd686e90e22968feef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:05.156272    3941 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:27:05.156641    3941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:05.156828    3941 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:27:05.156850    3941 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 13:27:05.156884    3941 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-314000"
	I0918 13:27:05.156892    3941 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-314000"
	W0918 13:27:05.156897    3941 addons.go:243] addon storage-provisioner should already be in state true
	I0918 13:27:05.156910    3941 host.go:66] Checking if "running-upgrade-314000" exists ...
	I0918 13:27:05.156934    3941 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-314000"
	I0918 13:27:05.156938    3941 config.go:182] Loaded profile config "running-upgrade-314000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:27:05.156999    3941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-314000"
	I0918 13:27:05.161136    3941 out.go:177] * Verifying Kubernetes components...
	I0918 13:27:05.161755    3941 kapi.go:59] client config for running-upgrade-314000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/running-upgrade-314000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105df9800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:27:05.165470    3941 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-314000"
	W0918 13:27:05.165475    3941 addons.go:243] addon default-storageclass should already be in state true
	I0918 13:27:05.165484    3941 host.go:66] Checking if "running-upgrade-314000" exists ...
	I0918 13:27:05.166020    3941 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:05.166025    3941 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 13:27:05.166031    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	I0918 13:27:05.169095    3941 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:27:02.433337    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:05.173227    3941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:27:05.177194    3941 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:05.177201    3941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 13:27:05.177208    3941 sshutil.go:53] new ssh client: &{IP:localhost Port:50220 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/running-upgrade-314000/id_rsa Username:docker}
	I0918 13:27:05.268116    3941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:27:05.273425    3941 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:27:05.273475    3941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:27:05.277443    3941 api_server.go:72] duration metric: took 120.60575ms to wait for apiserver process to appear ...
	I0918 13:27:05.277452    3941 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:27:05.277458    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:05.291367    3941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:05.304145    3941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:05.626580    3941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 13:27:05.626594    3941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 13:27:07.435519    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:07.435727    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:27:07.450581    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:27:07.450684    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:27:07.462412    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:27:07.462487    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:27:07.472872    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:27:07.472961    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:27:07.483103    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:27:07.483192    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:27:07.493462    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:27:07.493539    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:27:07.507651    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:27:07.507744    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:27:07.518864    3992 logs.go:276] 0 containers: []
	W0918 13:27:07.518876    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:27:07.518951    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:27:07.529306    3992 logs.go:276] 0 containers: []
	W0918 13:27:07.529322    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:27:07.529331    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:27:07.529336    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:27:07.567462    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:27:07.567470    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:27:07.580040    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:27:07.580052    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:27:07.615956    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:27:07.615971    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:27:07.629585    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:27:07.629600    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:27:07.641270    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:27:07.641281    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:27:07.659016    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:27:07.659032    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:27:07.672833    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:27:07.672847    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:27:07.677318    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:27:07.677328    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:27:07.703875    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:27:07.703913    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:27:07.718679    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:27:07.718690    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:27:07.730390    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:27:07.730400    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:27:07.744338    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:27:07.744348    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:27:07.755799    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:27:07.755811    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:27:07.767515    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:27:07.767531    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:27:10.290506    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:10.279397    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:10.279430    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:15.292521    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:15.292608    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:27:15.304068    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:27:15.304155    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:27:15.314676    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:27:15.314755    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:27:15.324939    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:27:15.325016    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:27:15.335557    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:27:15.335648    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:27:15.346171    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:27:15.346253    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:27:15.356727    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:27:15.356815    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:27:15.367158    3992 logs.go:276] 0 containers: []
	W0918 13:27:15.367169    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:27:15.367232    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:27:15.377436    3992 logs.go:276] 0 containers: []
	W0918 13:27:15.377446    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:27:15.377456    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:27:15.377462    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:27:15.389215    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:27:15.389227    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:27:15.407049    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:27:15.407058    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:27:15.429214    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:27:15.429222    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:27:15.467521    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:27:15.467530    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:27:15.481372    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:27:15.481382    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:27:15.507920    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:27:15.507939    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:27:15.522627    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:27:15.522638    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:27:15.535812    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:27:15.535823    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:27:15.548954    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:27:15.548966    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:27:15.563541    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:27:15.563550    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:27:15.576716    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:27:15.576728    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:27:15.589606    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:27:15.589617    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:27:15.593707    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:27:15.593714    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:27:15.630430    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:27:15.630445    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:27:15.280086    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:15.280113    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:18.148439    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:20.280420    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:20.280461    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:23.150531    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:23.150621    3992 kubeadm.go:597] duration metric: took 4m3.047120084s to restartPrimaryControlPlane
	W0918 13:27:23.150680    3992 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 13:27:23.150711    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0918 13:27:24.107814    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 13:27:24.112834    3992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:27:24.115757    3992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:27:24.118552    3992 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 13:27:24.118559    3992 kubeadm.go:157] found existing configuration files:
	
	I0918 13:27:24.118590    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf
	I0918 13:27:24.121032    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 13:27:24.121062    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:27:24.124011    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf
	I0918 13:27:24.127117    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 13:27:24.127146    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:27:24.129830    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf
	I0918 13:27:24.132424    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 13:27:24.132447    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:27:24.135573    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf
	I0918 13:27:24.138551    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 13:27:24.138591    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 13:27:24.141282    3992 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 13:27:24.156954    3992 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0918 13:27:24.157011    3992 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 13:27:24.214289    3992 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 13:27:24.214348    3992 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 13:27:24.214409    3992 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 13:27:24.264987    3992 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 13:27:24.269280    3992 out.go:235]   - Generating certificates and keys ...
	I0918 13:27:24.269317    3992 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 13:27:24.269352    3992 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 13:27:24.269413    3992 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 13:27:24.269464    3992 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 13:27:24.269557    3992 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 13:27:24.269617    3992 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 13:27:24.269649    3992 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 13:27:24.269686    3992 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 13:27:24.269728    3992 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 13:27:24.269772    3992 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 13:27:24.269794    3992 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 13:27:24.269823    3992 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 13:27:24.639180    3992 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 13:27:24.826786    3992 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 13:27:24.868300    3992 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 13:27:24.952020    3992 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 13:27:24.983414    3992 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 13:27:24.983843    3992 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 13:27:24.984014    3992 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 13:27:25.056088    3992 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 13:27:25.060241    3992 out.go:235]   - Booting up control plane ...
	I0918 13:27:25.060293    3992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 13:27:25.060333    3992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 13:27:25.060367    3992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 13:27:25.067013    3992 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 13:27:25.067844    3992 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 13:27:25.280927    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:25.280964    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:29.570704    3992 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502489 seconds
	I0918 13:27:29.570765    3992 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 13:27:29.573954    3992 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 13:27:30.085041    3992 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 13:27:30.085209    3992 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-367000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 13:27:30.590751    3992 kubeadm.go:310] [bootstrap-token] Using token: bdspm0.fklw4sa7cic7hhpg
	I0918 13:27:30.596728    3992 out.go:235]   - Configuring RBAC rules ...
	I0918 13:27:30.596788    3992 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 13:27:30.596835    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 13:27:30.601165    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 13:27:30.602006    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 13:27:30.602922    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 13:27:30.603817    3992 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 13:27:30.606937    3992 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 13:27:30.783063    3992 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 13:27:30.994873    3992 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 13:27:30.995384    3992 kubeadm.go:310] 
	I0918 13:27:30.995482    3992 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 13:27:30.995488    3992 kubeadm.go:310] 
	I0918 13:27:30.995543    3992 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 13:27:30.995548    3992 kubeadm.go:310] 
	I0918 13:27:30.995561    3992 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 13:27:30.995595    3992 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 13:27:30.995625    3992 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 13:27:30.995628    3992 kubeadm.go:310] 
	I0918 13:27:30.995656    3992 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 13:27:30.995659    3992 kubeadm.go:310] 
	I0918 13:27:30.995682    3992 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 13:27:30.995685    3992 kubeadm.go:310] 
	I0918 13:27:30.995783    3992 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 13:27:30.995842    3992 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 13:27:30.995882    3992 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 13:27:30.995889    3992 kubeadm.go:310] 
	I0918 13:27:30.995925    3992 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 13:27:30.996074    3992 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 13:27:30.996084    3992 kubeadm.go:310] 
	I0918 13:27:30.996129    3992 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bdspm0.fklw4sa7cic7hhpg \
	I0918 13:27:30.996189    3992 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 \
	I0918 13:27:30.996202    3992 kubeadm.go:310] 	--control-plane 
	I0918 13:27:30.996204    3992 kubeadm.go:310] 
	I0918 13:27:30.996251    3992 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 13:27:30.996253    3992 kubeadm.go:310] 
	I0918 13:27:30.996296    3992 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bdspm0.fklw4sa7cic7hhpg \
	I0918 13:27:30.996372    3992 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 
	I0918 13:27:30.996431    3992 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 13:27:30.996438    3992 cni.go:84] Creating CNI manager for ""
	I0918 13:27:30.996445    3992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:27:31.000983    3992 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 13:27:31.005972    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 13:27:31.029521    3992 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0918 13:27:31.035351    3992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 13:27:31.035422    3992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 13:27:31.035497    3992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-367000 minikube.k8s.io/updated_at=2024_09_18T13_27_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=stopped-upgrade-367000 minikube.k8s.io/primary=true
	I0918 13:27:31.066961    3992 ops.go:34] apiserver oom_adj: -16
	I0918 13:27:31.066961    3992 kubeadm.go:1113] duration metric: took 31.599167ms to wait for elevateKubeSystemPrivileges
	I0918 13:27:31.082511    3992 kubeadm.go:394] duration metric: took 4m10.992796583s to StartCluster
	I0918 13:27:31.082530    3992 settings.go:142] acquiring lock: {Name:mkbb043d0459391a7d922bd686e90e22968feef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:31.082613    3992 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:27:31.083004    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:31.083187    3992 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:27:31.083280    3992 config.go:182] Loaded profile config "stopped-upgrade-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:27:31.083241    3992 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 13:27:31.083311    3992 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-367000"
	I0918 13:27:31.083322    3992 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-367000"
	W0918 13:27:31.083327    3992 addons.go:243] addon storage-provisioner should already be in state true
	I0918 13:27:31.083339    3992 host.go:66] Checking if "stopped-upgrade-367000" exists ...
	I0918 13:27:31.083353    3992 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-367000"
	I0918 13:27:31.083361    3992 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-367000"
	I0918 13:27:31.084410    3992 kapi.go:59] client config for stopped-upgrade-367000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e05800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:27:31.084530    3992 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-367000"
	W0918 13:27:31.084537    3992 addons.go:243] addon default-storageclass should already be in state true
	I0918 13:27:31.084544    3992 host.go:66] Checking if "stopped-upgrade-367000" exists ...
	I0918 13:27:31.086873    3992 out.go:177] * Verifying Kubernetes components...
	I0918 13:27:31.087272    3992 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:31.091106    3992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 13:27:31.091112    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:27:31.094906    3992 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:27:31.098961    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:27:31.102948    3992 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:31.102961    3992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 13:27:31.102969    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:27:31.170610    3992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:27:31.177063    3992 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:27:31.177128    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:27:31.181788    3992 api_server.go:72] duration metric: took 98.591917ms to wait for apiserver process to appear ...
	I0918 13:27:31.181797    3992 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:27:31.181805    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:31.187263    3992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:31.202955    3992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:30.281623    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:30.281679    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:35.282528    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:35.282578    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0918 13:27:35.628114    3941 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0918 13:27:35.636264    3941 out.go:177] * Enabled addons: storage-provisioner
	I0918 13:27:31.592439    3992 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 13:27:31.592451    3992 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 13:27:36.183294    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:36.183318    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:35.644228    3941 addons.go:510] duration metric: took 30.488177417s for enable addons: enabled=[storage-provisioner]
	I0918 13:27:41.183621    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:41.183642    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:40.283651    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:40.283691    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:46.183725    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:46.183772    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:45.285109    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:45.285147    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:51.183967    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:51.184003    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:50.287241    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:50.287281    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:56.184342    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:56.184369    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:55.289390    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:55.289410    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:01.184756    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:01.184776    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0918 13:28:01.593894    3992 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0918 13:28:01.603062    3992 out.go:177] * Enabled addons: storage-provisioner
	I0918 13:28:00.291037    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:00.291083    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:01.610058    3992 addons.go:510] duration metric: took 30.527649167s for enable addons: enabled=[storage-provisioner]
	I0918 13:28:06.185307    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:06.185351    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:05.292064    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:05.292192    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:05.305218    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:05.305303    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:05.316310    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:05.316396    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:05.326939    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:05.327023    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:05.337843    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:05.337926    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:05.348354    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:05.348452    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:05.359340    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:05.359431    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:05.369439    3941 logs.go:276] 0 containers: []
	W0918 13:28:05.369451    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:05.369523    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:05.379887    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:05.379904    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:05.379909    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:05.394495    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:05.394506    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:05.406372    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:05.406386    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:05.418683    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:05.418697    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:05.437179    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:05.437190    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:05.455798    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:05.455814    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:05.468229    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:05.468243    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:05.494173    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:05.494181    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:05.506111    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:05.506122    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:05.540968    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:05.540980    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:05.545333    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:05.545343    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:05.581880    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:05.581889    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:05.595752    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:05.595767    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:11.186187    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:11.186213    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:08.109541    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:16.187188    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:16.187235    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:13.111760    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:13.111869    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:13.123156    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:13.123246    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:13.133950    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:13.134038    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:13.144339    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:13.144415    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:13.155500    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:13.155587    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:13.166808    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:13.166893    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:13.177174    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:13.177254    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:13.188070    3941 logs.go:276] 0 containers: []
	W0918 13:28:13.188086    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:13.188158    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:13.198589    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:13.198608    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:13.198613    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:13.219014    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:13.219024    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:13.236381    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:13.236396    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:13.248996    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:13.249011    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:13.253582    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:13.253591    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:13.290517    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:13.290530    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:13.304360    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:13.304372    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:13.316943    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:13.316954    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:13.328898    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:13.328909    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:13.352354    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:13.352365    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:13.386905    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:13.386914    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:13.400947    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:13.400958    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:13.412911    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:13.412926    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:15.929703    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:21.187439    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:21.187464    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:20.931970    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:20.932161    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:20.950073    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:20.950183    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:20.964117    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:20.964212    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:20.978804    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:20.978883    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:20.989603    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:20.989671    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:21.000088    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:21.000173    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:21.010456    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:21.010535    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:21.021360    3941 logs.go:276] 0 containers: []
	W0918 13:28:21.021371    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:21.021436    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:21.038503    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:21.038518    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:21.038523    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:21.072361    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:21.072369    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:21.087362    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:21.087374    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:21.099407    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:21.099418    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:21.123369    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:21.123380    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:21.135606    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:21.135618    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:21.140707    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:21.140719    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:21.175370    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:21.175381    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:21.189844    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:21.189853    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:21.203537    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:21.203547    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:21.218092    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:21.218102    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:21.229705    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:21.229716    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:21.246911    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:21.246922    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:26.188778    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:26.188823    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:23.760425    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:31.190293    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:31.190463    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:31.202458    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:31.202541    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:31.213129    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:31.213212    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:31.223286    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:31.223357    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:31.234055    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:31.234133    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:31.245050    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:31.245129    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:31.255761    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:31.255832    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:31.266023    3992 logs.go:276] 0 containers: []
	W0918 13:28:31.266035    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:31.266104    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:31.276267    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:31.276291    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:31.276296    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:31.281100    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:31.281108    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:31.304755    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:31.304766    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:31.316400    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:31.316411    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:31.328236    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:31.328246    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:31.345770    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:31.345780    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:31.357295    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:31.357305    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:31.368455    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:31.368465    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:31.402288    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:31.402300    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:28.762614    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:28.762835    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:28.778878    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:28.778984    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:28.795903    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:28.795986    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:28.807054    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:28.807340    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:28.820291    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:28.820379    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:28.831098    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:28.831187    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:28.842056    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:28.842140    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:28.852614    3941 logs.go:276] 0 containers: []
	W0918 13:28:28.852627    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:28.852701    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:28.863373    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:28.863387    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:28.863393    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:28.896593    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:28.896604    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:28.910513    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:28.910527    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:28.925523    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:28.925532    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:28.939109    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:28.939122    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:28.951120    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:28.951131    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:28.968819    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:28.968833    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:28.984743    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:28.984758    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:29.009941    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:29.009948    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:29.014491    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:29.014499    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:29.056868    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:29.056880    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:29.071773    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:29.071783    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:29.082981    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:29.082992    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
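
Each "Gathering logs for X" pair then tails exactly one source: docker logs --tail 400 <id> for containers, journalctl for the kubelet and Docker units. A minimal sketch of those two fetchers, with hypothetical names and local (non-ssh) execution assumed:

    package main

    import "os/exec"

    // tailContainer mirrors: docker logs --tail 400 <id>
    func tailContainer(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    // unitLogs mirrors: sudo journalctl -u <unit> [-u <unit> ...] -n 400
    func unitLogs(units ...string) (string, error) {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", "400")
        out, err := exec.Command("sudo", args...).CombinedOutput()
        return string(out), err
    }

Under those assumptions, unitLogs("kubelet") corresponds to the kubelet step above and unitLogs("docker", "cri-docker") to the Docker step.
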
	I0918 13:28:31.595221    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:31.416190    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:31.416201    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:31.428261    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:31.428272    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:31.443312    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:31.443321    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:31.467958    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:31.467977    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:34.007891    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:36.597304    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:36.597519    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:36.615806    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:36.615926    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:36.631706    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:36.631801    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:36.647489    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:36.647575    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:36.658515    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:36.658597    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:36.672547    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:36.672619    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:36.682945    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:36.683030    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:36.693581    3941 logs.go:276] 0 containers: []
	W0918 13:28:36.693594    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:36.693680    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:36.705107    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:36.705122    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:36.705129    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:36.718734    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:36.718748    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:36.730592    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:36.730602    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:36.742150    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:36.742161    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:36.759167    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:36.759177    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:36.771404    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:36.771417    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:36.795502    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:36.795513    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:36.806862    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:36.806874    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:36.840233    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:36.840241    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:36.844613    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:36.844620    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:36.878700    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:36.878715    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:36.893647    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:36.893659    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:36.905594    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:36.905609    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:39.010104    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:39.010217    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:39.021968    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:39.022059    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:39.033210    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:39.033294    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:39.043275    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:39.043359    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:39.054344    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:39.054429    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:39.064718    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:39.064802    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:39.074755    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:39.074838    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:39.084976    3992 logs.go:276] 0 containers: []
	W0918 13:28:39.084987    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:39.085057    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:39.095398    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:39.095417    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:39.095424    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:39.109616    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:39.109626    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:39.145578    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:39.145589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:39.162825    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:39.162839    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:39.174461    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:39.174470    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:39.186812    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:39.186823    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:39.198556    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:39.198565    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:39.216718    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:39.216730    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:39.229631    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:39.229643    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:39.254796    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:39.254806    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:39.289785    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:39.289793    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:39.293721    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:39.293730    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:39.308118    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:39.308128    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
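
The "container status" command just above embeds its own fallback chain: resolve crictl if installed (otherwise the literal name, which then fails), and drop back to sudo docker ps -a when the first command errors. Approximately the same logic in Go:

    package main

    import "os/exec"

    // containerStatus mirrors:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }
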
	I0918 13:28:39.422269    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:41.821684    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:44.424452    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:44.424627    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:44.442446    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:44.442559    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:44.456602    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:44.456697    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:44.469310    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:44.469396    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:44.480011    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:44.480091    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:44.490656    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:44.490748    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:44.501672    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:44.501759    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:44.512247    3941 logs.go:276] 0 containers: []
	W0918 13:28:44.512263    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:44.512327    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:44.526946    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:44.526962    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:44.526967    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:44.544699    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:44.544710    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:44.556204    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:44.556217    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:44.560866    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:44.560872    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:44.575381    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:44.575392    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:44.590281    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:44.590291    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:44.601407    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:44.601417    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:44.615866    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:44.615876    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:44.627772    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:44.627783    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:44.651407    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:44.651415    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:44.663164    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:44.663175    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:44.696892    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:44.696902    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:44.736919    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:44.736934    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:46.823251    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:46.823576    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:46.848348    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:46.848533    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:46.866964    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:46.867086    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:46.879725    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:46.879824    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:46.891600    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:46.891703    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:46.902262    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:46.902367    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:46.912487    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:46.912578    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:46.926486    3992 logs.go:276] 0 containers: []
	W0918 13:28:46.926500    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:46.926580    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:46.937514    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:46.937533    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:46.937539    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:46.948995    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:46.949005    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:46.972722    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:46.972734    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:46.977208    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:46.977215    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:46.988843    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:46.988853    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:47.004300    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:47.004312    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:47.021807    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:47.021820    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:47.036047    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:47.036058    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:47.051272    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:47.051287    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:47.063398    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:47.063413    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:47.097167    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:47.097178    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
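
The "describe nodes" step invokes the Kubernetes-version-pinned kubectl inside the guest with the cluster's kubeconfig. A hedged Go equivalent, using the binary and kubeconfig paths exactly as they appear in the log:

    package main

    import "os/exec"

    // describeNodes mirrors the "describe nodes" gathering step above.
    func describeNodes() ([]byte, error) {
        return exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
        ).CombinedOutput()
    }
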
	I0918 13:28:47.133771    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:47.133783    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:47.148210    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:47.148221    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:49.664225    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:47.250666    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:54.666411    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:54.666595    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:54.679607    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:54.679696    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:54.691123    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:54.691211    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:54.701508    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:54.701589    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:54.712047    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:54.712132    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:54.722533    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:54.722620    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:54.732869    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:54.732957    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:54.743135    3992 logs.go:276] 0 containers: []
	W0918 13:28:54.743147    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:54.743227    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:54.754937    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:54.754953    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:54.754958    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:54.767196    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:54.767206    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:54.786203    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:54.786214    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:54.797033    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:54.797044    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:54.830730    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:54.830739    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:54.846846    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:54.846857    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:54.860548    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:54.860558    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:54.872374    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:54.872387    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:54.900629    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:54.900639    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:54.925486    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:54.925496    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:54.937646    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:54.937657    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:54.942187    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:54.942199    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:54.977025    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:54.977036    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:52.252748    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:52.253022    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:52.275002    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:28:52.275127    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:52.291300    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:28:52.291399    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:52.303650    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:28:52.303738    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:52.314573    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:28:52.314660    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:52.324845    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:28:52.324938    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:52.335441    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:28:52.335523    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:52.345387    3941 logs.go:276] 0 containers: []
	W0918 13:28:52.345398    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:52.345471    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:52.356413    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:28:52.356428    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:28:52.356434    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:28:52.377189    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:52.377202    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:52.381804    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:52.381810    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:52.420149    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:28:52.420160    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:28:52.434384    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:28:52.434394    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:28:52.446521    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:28:52.446532    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:28:52.458401    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:28:52.458412    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:28:52.470015    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:52.470027    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:52.494090    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:28:52.494098    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:52.505755    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:52.505766    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:52.538767    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:28:52.538777    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:28:52.566134    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:28:52.566144    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:28:52.581716    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:28:52.581728    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:28:55.098480    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:57.489911    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:00.100572    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:00.100769    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:00.115844    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:00.115947    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:00.128060    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:00.128146    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:00.139107    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:00.139191    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:00.149726    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:00.149802    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:00.160286    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:00.160375    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:00.171287    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:00.171357    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:00.181955    3941 logs.go:276] 0 containers: []
	W0918 13:29:00.181971    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:00.182030    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:00.192418    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:00.192439    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:00.192445    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:00.226694    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:00.226711    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:00.241083    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:00.241095    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:00.265777    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:00.265785    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:00.277766    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:00.277777    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:00.294107    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:00.294117    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:00.305437    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:00.305449    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:00.322448    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:00.322459    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:00.355838    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:00.355848    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:00.360450    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:00.360457    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:00.374790    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:00.374799    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:00.386878    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:00.386889    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:00.399321    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:00.399332    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:02.492029    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:02.492290    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:02.509531    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:02.509631    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:02.522335    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:02.522429    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:02.533541    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:02.533631    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:02.543818    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:02.543901    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:02.554356    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:02.554435    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:02.564733    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:02.564817    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:02.575206    3992 logs.go:276] 0 containers: []
	W0918 13:29:02.575216    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:02.575281    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:02.585996    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:02.586012    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:02.586018    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:02.597629    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:02.597640    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:02.612615    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:02.612628    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:02.625101    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:02.625114    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:02.636584    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:02.636594    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:02.660032    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:02.660041    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:02.671772    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:02.671782    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:02.708548    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:02.708559    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:02.723591    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:02.723602    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:02.737635    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:02.737649    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:02.748757    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:02.748767    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:02.765679    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:02.765691    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:02.798643    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:02.798651    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:05.305056    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:02.912607    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:10.307084    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:10.307566    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:10.321535    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:10.321635    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:10.333273    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:10.333349    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:10.346448    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:10.346519    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:10.356763    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:10.356831    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:10.367433    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:10.367525    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:10.380092    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:10.380181    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:10.390456    3992 logs.go:276] 0 containers: []
	W0918 13:29:10.390470    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:10.390545    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:10.400563    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:10.400578    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:10.400583    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:10.416280    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:10.416290    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:10.427681    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:10.427690    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:10.463920    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:10.463934    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:10.499216    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:10.499227    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:10.513500    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:10.513510    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:10.527569    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:10.527585    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:10.539254    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:10.539265    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:10.550830    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:10.550841    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:10.555487    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:10.555495    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:10.567640    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:10.567650    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:10.582708    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:10.582718    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:10.599678    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:10.599688    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:07.914436    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:07.914699    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:07.933414    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:07.933532    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:07.948785    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:07.948873    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:07.960542    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:07.960632    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:07.971323    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:07.971405    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:07.982393    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:07.982472    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:07.993456    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:07.993541    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:08.003849    3941 logs.go:276] 0 containers: []
	W0918 13:29:08.003861    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:08.003929    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:08.014994    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:08.015010    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:08.015016    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:08.028993    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:08.029009    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:08.042617    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:08.042628    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:08.058832    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:08.058844    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:08.070627    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:08.070640    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:08.095582    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:08.095592    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:08.106876    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:08.106885    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:08.141846    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:08.141857    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:08.146253    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:08.146258    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:08.187139    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:08.187154    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:08.199726    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:08.199737    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:08.218902    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:08.218915    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:08.237398    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:08.237410    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:10.750706    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:13.126578    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:15.752574    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:15.752790    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:15.770737    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:15.770844    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:15.785679    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:15.785767    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:15.796502    3941 logs.go:276] 2 containers: [90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:15.796593    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:15.811353    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:15.811429    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:15.822101    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:15.822191    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:15.833457    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:15.833532    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:15.843402    3941 logs.go:276] 0 containers: []
	W0918 13:29:15.843413    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:15.843481    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:15.854869    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:15.854885    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:15.854890    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:15.872240    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:15.872248    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:15.883956    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:15.883970    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:15.907592    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:15.907600    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:15.940457    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:15.940466    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:15.975765    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:15.975777    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:15.991175    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:15.991185    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:16.006069    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:16.006083    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:16.018400    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:16.018412    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:16.023324    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:16.023332    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:16.035606    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:16.035621    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:16.047270    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:16.047284    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:16.062694    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:16.062708    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:18.128798    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:18.128998    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:18.141230    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:18.141328    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:18.152232    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:18.152316    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:18.163086    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:18.163164    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:18.173561    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:18.173648    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:18.184203    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:18.184294    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:18.195296    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:18.195369    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:18.210276    3992 logs.go:276] 0 containers: []
	W0918 13:29:18.210288    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:18.210363    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:18.221106    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:18.221123    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:18.221129    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:18.235166    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:18.235178    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:18.250102    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:18.250113    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:18.268188    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:18.268199    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:18.280137    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:18.280151    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:18.315896    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:18.315905    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:18.320451    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:18.320460    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:18.356294    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:18.356303    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:18.370707    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:18.370718    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:18.384882    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:18.384892    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:18.396551    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:18.396566    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:18.409214    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:18.409226    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:18.433669    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:18.433678    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:20.947069    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:18.576106    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:25.949219    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:25.949423    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:25.966821    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:25.966927    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:25.981164    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:25.981257    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:25.992561    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:25.992638    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:26.003426    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:26.003502    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:26.015989    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:26.016075    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:26.029913    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:26.029991    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:26.040189    3992 logs.go:276] 0 containers: []
	W0918 13:29:26.040201    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:26.040267    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:26.057270    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:26.057285    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:26.057291    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:26.072097    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:26.072110    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:26.084034    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:26.084047    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:26.112921    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:26.112930    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:26.117187    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:26.117194    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:26.152539    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:26.152550    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:26.166946    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:26.166957    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:26.182205    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:26.182215    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:26.193473    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:26.193486    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:26.209368    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:26.209379    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:26.221532    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:26.221542    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:26.239363    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:26.239371    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:26.275812    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:26.275828    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
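
Each pass begins by resolving one container ID per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which produces the "N containers: [...]" lines (logs.go:276) and the 'No container was found matching "kindnet"' warning whenever the filter matches nothing. A sketch of that enumeration, run locally for simplicity; minikube runs the identical command inside the VM through its ssh_runner, and everything beyond the command text is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the enumeration command recorded in the log:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
// One ID per line comes back on stdout; strings.Fields splits them.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Component order matches the passes above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// Corresponds to the W-level "No container was found
			// matching" warnings in the log.
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
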
	I0918 13:29:23.578187    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:23.578377    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:23.590394    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:23.590490    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:23.608750    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:23.608843    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:23.621118    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:23.621210    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:23.631466    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:23.631553    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:23.642063    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:23.642151    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:23.652585    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:23.652669    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:23.662850    3941 logs.go:276] 0 containers: []
	W0918 13:29:23.662862    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:23.662930    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:23.673279    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:23.673298    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:23.673303    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:23.684740    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:23.684753    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:23.697235    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:23.697247    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:23.710067    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:23.710080    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:23.725900    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:23.725911    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:23.751615    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:23.751626    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:23.785208    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:23.785216    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:23.799560    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:23.799571    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:23.813937    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:23.813948    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:23.825415    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:23.825425    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:23.843106    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:23.843116    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:23.847944    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:23.847951    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:23.885127    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:23.885138    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:23.897191    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:23.897206    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:23.909736    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:23.909749    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:26.426434    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:28.789467    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:31.428564    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:31.428771    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:31.444391    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:31.444487    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:31.457782    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:31.457866    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:31.470785    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:31.470879    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:31.482315    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:31.482422    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:31.493177    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:31.493257    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:31.504207    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:31.504282    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:31.515087    3941 logs.go:276] 0 containers: []
	W0918 13:29:31.515102    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:31.515171    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:31.531541    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:31.531557    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:31.531564    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:31.536256    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:31.536262    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:31.550562    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:31.550575    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:31.562052    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:31.562062    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:31.579921    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:31.579935    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:31.612299    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:31.612307    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:31.624022    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:31.624037    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:31.635977    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:31.635987    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:31.651276    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:31.651289    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:31.663010    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:31.663023    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:31.677329    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:31.677343    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:31.689180    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:31.689196    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:31.705799    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:31.705816    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:31.741106    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:31.741118    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:31.767041    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:31.767051    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:33.791620    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:33.791761    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:33.805900    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:33.806010    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:33.817333    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:33.817405    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:33.827526    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:33.827609    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:33.838083    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:33.838155    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:33.848694    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:33.848762    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:33.859392    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:33.859473    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:33.869753    3992 logs.go:276] 0 containers: []
	W0918 13:29:33.869766    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:33.869837    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:33.879970    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:33.879984    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:33.879991    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:33.919124    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:33.919133    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:33.933357    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:33.933367    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:33.967454    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:33.967464    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:33.972156    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:33.972162    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:33.986197    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:33.986208    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:33.997983    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:33.997996    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:34.009899    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:34.009909    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:34.024197    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:34.024205    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:34.036407    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:34.036423    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:34.054341    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:34.054351    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:34.068557    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:34.068566    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:34.093319    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:34.093332    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:34.279596    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:36.606687    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:39.280761    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:39.280990    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:39.301084    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:39.301200    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:39.315666    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:39.315748    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:39.328017    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:39.328112    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:39.339420    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:39.339507    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:39.350342    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:39.350423    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:39.361458    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:39.361540    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:39.371362    3941 logs.go:276] 0 containers: []
	W0918 13:29:39.371378    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:39.371449    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:39.381537    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:39.381555    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:39.381561    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:39.394373    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:39.394385    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:39.406959    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:39.406972    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:39.418659    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:39.418674    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:39.430490    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:39.430501    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:39.465873    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:39.465887    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:39.480666    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:39.480676    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:39.499712    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:39.499725    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:39.515295    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:39.515305    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:39.549925    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:39.549935    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:39.554691    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:39.554700    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:39.566317    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:39.566329    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:39.590964    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:39.590972    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:39.608088    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:39.608097    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:39.620062    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:39.620076    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:42.133375    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:41.608968    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:41.609205    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:41.626019    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:41.626123    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:41.639696    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:41.639778    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:41.651543    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:41.651629    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:41.662409    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:41.662490    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:41.672822    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:41.672909    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:41.683065    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:41.683143    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:41.693305    3992 logs.go:276] 0 containers: []
	W0918 13:29:41.693316    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:41.693387    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:41.704100    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:41.704117    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:41.704124    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:41.708981    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:41.708989    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:41.723513    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:41.723529    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:41.738206    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:41.738216    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:41.749733    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:41.749745    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:41.766972    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:41.766986    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:41.778629    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:41.778644    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:41.802085    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:41.802105    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:41.838376    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:41.838389    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:41.874848    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:41.874859    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:41.889472    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:41.889482    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:41.900905    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:41.900916    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:41.912457    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:41.912469    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:44.427008    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:47.135583    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:47.135904    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:47.157673    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:47.157798    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:49.429150    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:49.429368    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:49.441338    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:49.441427    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:49.457751    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:49.457835    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:49.468578    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:29:49.468664    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:49.479085    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:49.479168    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:49.489871    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:49.489941    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:49.500585    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:49.500711    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:49.511075    3992 logs.go:276] 0 containers: []
	W0918 13:29:49.511087    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:49.511154    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:49.523723    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:49.523739    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:49.523746    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:49.558075    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:29:49.558090    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:29:49.570127    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:49.570138    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:49.581711    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:49.581724    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:49.599045    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:49.599059    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:49.624408    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:49.624416    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:49.628730    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:49.628737    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:49.640616    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:49.640631    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:49.655299    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:49.655308    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:49.667418    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:49.667430    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:49.701134    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:49.701149    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:49.714975    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:49.714989    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:49.726195    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:49.726209    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:49.740455    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:29:49.740469    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:29:49.752046    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:49.752060    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:47.182940    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:47.183038    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:47.196036    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:47.196113    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:47.206575    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:47.206665    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:47.217014    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:47.217095    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:47.227268    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:47.227351    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:47.237929    3941 logs.go:276] 0 containers: []
	W0918 13:29:47.237941    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:47.238013    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:47.248525    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:47.248543    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:47.248548    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:47.262957    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:47.262970    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:47.279067    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:47.279080    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:47.291784    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:47.291794    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:47.316487    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:47.316496    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:47.350487    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:47.350495    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:47.386109    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:47.386122    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:47.400225    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:47.400240    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:47.411733    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:47.411747    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:47.416120    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:47.416127    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:47.427778    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:47.427792    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:47.445001    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:47.445017    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:47.457595    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:47.457611    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:47.469771    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:47.469786    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:47.481169    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:47.481191    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
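
Every "Gathering logs for X ..." step then reduces to a single /bin/bash -c command: docker logs --tail 400 <id> for containers, journalctl for the kubelet and docker/cri-docker units, dmesg filtered to warnings and above, kubectl describe nodes via the pinned v1.24.1 binary, and a crictl-or-docker fallback for container status. A sketch of that dispatch, assuming local execution; the command strings are copied verbatim from the Run: lines, while the helper names are hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// journalSources lists the non-container gathering steps; every command
// here is copied verbatim from the Run: lines above.
var journalSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

// dockerLogs builds the per-container command used for every component;
// the helper name is hypothetical, the command text is from the log.
func dockerLogs(id string) string {
	return "docker logs --tail 400 " + id
}

// gather runs one command through /bin/bash -c, as each Run: line shows.
func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	if _, err := exec.Command("/bin/bash", "-c", command).CombinedOutput(); err != nil {
		fmt.Println(name, "failed:", err)
	}
}

func main() {
	for name, cmd := range journalSources {
		gather(name, cmd)
	}
	// Container IDs come from the enumeration step, e.g. PID 3941's
	// kube-apiserver container in the pass above.
	gather("kube-apiserver [6423e48e15ad]", dockerLogs("6423e48e15ad"))
}
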
	I0918 13:29:49.996902    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:52.268406    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:54.998975    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:54.999135    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:55.012597    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:29:55.012690    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:55.023365    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:29:55.023454    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:55.033939    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:29:55.034034    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:55.044596    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:29:55.044681    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:55.057906    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:29:55.057986    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:55.069117    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:29:55.069198    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:55.079236    3941 logs.go:276] 0 containers: []
	W0918 13:29:55.079248    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:55.079320    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:55.089514    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:29:55.089530    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:29:55.089535    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:29:55.103535    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:29:55.103544    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:29:55.119168    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:29:55.119180    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:55.130677    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:29:55.130688    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:29:55.142948    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:29:55.142963    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:29:55.155164    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:55.155180    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:55.178970    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:55.178977    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:55.220129    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:29:55.220144    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:29:55.235141    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:29:55.235151    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:29:55.246724    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:29:55.246735    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:29:55.258758    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:29:55.258771    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:29:55.281811    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:55.281822    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:55.316668    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:55.316679    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:55.320912    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:29:55.320919    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:29:55.334916    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:29:55.334931    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:29:57.270786    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:57.271047    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:57.290026    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:57.290129    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:57.306659    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:57.306738    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:57.317472    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:29:57.317558    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:57.329983    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:57.330060    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:57.340800    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:57.340877    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:57.351513    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:57.351585    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:57.362121    3992 logs.go:276] 0 containers: []
	W0918 13:29:57.362134    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:57.362204    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:57.373237    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:57.373256    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:57.373261    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:57.387982    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:29:57.387995    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:29:57.401806    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:57.401818    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:57.413570    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:57.413580    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:57.425603    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:57.425613    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:57.437505    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:57.437517    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:57.450037    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:57.450048    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:57.484469    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:29:57.484481    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:29:57.501847    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:57.501858    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:57.538002    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:57.538013    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:57.552738    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:57.552751    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:57.564667    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:57.564677    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:57.583040    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:57.583050    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:57.607010    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:57.607021    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:57.612213    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:57.612224    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:00.139639    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:57.848711    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:05.141756    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:05.141882    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:05.153287    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:05.153371    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:05.163964    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:05.164046    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:05.177812    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:05.177895    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:05.194258    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:05.194349    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:05.206406    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:05.206483    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:05.216612    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:05.216688    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:05.226357    3992 logs.go:276] 0 containers: []
	W0918 13:30:05.226369    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:05.226437    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:05.236642    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:05.236661    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:05.236667    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:05.258765    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:05.258775    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:05.270189    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:05.270202    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:05.282268    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:05.282278    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:05.299872    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:05.299883    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:05.315251    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:05.315263    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:05.332976    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:05.332986    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:05.344633    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:05.344645    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:05.358415    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:05.358429    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:05.370347    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:05.370362    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:05.381710    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:05.381720    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:05.405778    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:05.405787    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:05.438919    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:05.438931    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:05.443458    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:05.443465    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:05.480592    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:05.480604    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:02.850868    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:02.851136    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:02.875761    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:02.875894    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:02.892196    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:02.892297    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:02.905019    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:02.905116    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:02.915969    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:02.916054    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:02.926701    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:02.926786    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:02.938228    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:02.938304    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:02.948996    3941 logs.go:276] 0 containers: []
	W0918 13:30:02.949007    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:02.949073    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:02.996083    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:02.996102    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:02.996107    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:03.010001    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:03.010012    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:03.028476    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:03.028485    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:03.039717    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:03.039729    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:03.052150    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:03.052161    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:03.067693    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:03.067705    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:03.082517    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:03.082529    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:03.087037    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:03.087044    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:03.100749    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:03.100760    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:03.126005    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:03.126015    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:03.137592    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:03.137603    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:03.152515    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:03.152526    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:03.188057    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:03.188069    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:03.206158    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:03.206169    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:03.240595    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:03.240607    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:05.755138    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:07.997045    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:10.757368    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:10.757654    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:10.779458    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:10.779595    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:10.796253    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:10.796339    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:10.809422    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:10.809509    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:10.824714    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:10.824795    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:10.835758    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:10.835834    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:10.848729    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:10.848796    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:10.858948    3941 logs.go:276] 0 containers: []
	W0918 13:30:10.858960    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:10.859027    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:10.870213    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:10.870232    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:10.870237    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:10.882227    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:10.882238    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:10.897084    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:10.897098    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:10.909002    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:10.909014    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:10.921007    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:10.921017    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:10.932151    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:10.932163    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:10.966553    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:10.966564    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:10.978944    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:10.978956    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:11.000763    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:11.000777    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:11.015570    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:11.015583    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:11.020945    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:11.020957    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:11.032694    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:11.032706    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:11.058205    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:11.058225    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:11.094295    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:11.094306    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:11.112538    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:11.112550    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:12.999175    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:12.999416    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:13.014212    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:13.014309    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:13.027221    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:13.027323    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:13.038721    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:13.038808    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:13.053341    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:13.053427    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:13.063966    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:13.064054    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:13.075684    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:13.075770    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:13.086593    3992 logs.go:276] 0 containers: []
	W0918 13:30:13.086604    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:13.086677    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:13.097448    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:13.097465    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:13.097471    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:13.113344    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:13.113355    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:13.124879    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:13.124889    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:13.139245    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:13.139255    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:13.157758    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:13.157769    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:13.193082    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:13.193093    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:13.227827    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:13.227838    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:13.239594    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:13.239607    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:13.252752    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:13.252767    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:13.265358    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:13.265373    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:13.276881    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:13.276897    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:13.281052    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:13.281059    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:13.295201    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:13.295213    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:13.307358    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:13.307372    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:13.319532    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:13.319547    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:15.845742    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:13.629361    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:20.848020    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:20.848223    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:20.869294    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:20.869394    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:20.880240    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:20.880328    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:20.891147    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:20.891230    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:20.905324    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:20.905414    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:20.919505    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:20.919593    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:20.929849    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:20.929934    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:20.940992    3992 logs.go:276] 0 containers: []
	W0918 13:30:20.941004    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:20.941071    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:20.951489    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:20.951505    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:20.951511    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:20.965713    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:20.965724    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:20.980237    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:20.980253    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:20.993578    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:20.993589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:21.009592    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:21.009604    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:21.024229    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:21.024240    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:21.036507    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:21.036517    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:21.054447    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:21.054460    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:21.089577    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:21.089592    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:21.102538    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:21.102549    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:21.128511    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:21.128522    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:21.140864    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:21.140880    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:21.145687    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:21.145695    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:21.157390    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:21.157402    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:21.168784    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:21.168800    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:18.631505    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:18.631782    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:18.649907    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:18.650002    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:18.664205    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:18.664288    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:18.675778    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:18.675868    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:18.686747    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:18.686834    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:18.702125    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:18.702202    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:18.712538    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:18.712625    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:18.722867    3941 logs.go:276] 0 containers: []
	W0918 13:30:18.722878    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:18.722952    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:18.734605    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:18.734621    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:18.734626    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:18.769335    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:18.769345    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:18.781306    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:18.781317    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:18.792896    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:18.792906    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:18.807821    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:18.807832    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:18.825164    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:18.825174    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:18.850793    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:18.850803    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:18.855657    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:18.855663    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:18.870000    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:18.870015    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:18.884704    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:18.884716    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:18.896276    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:18.896285    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:18.912190    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:18.912200    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:18.925196    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:18.925210    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:18.960465    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:18.960476    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:18.973215    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:18.973228    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:21.487285    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:23.709023    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:26.489525    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:26.489762    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:26.507090    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:26.507205    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:26.520278    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:26.520356    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:26.531754    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:26.531840    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:26.542308    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:26.542397    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:26.553049    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:26.553139    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:26.564588    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:26.564664    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:26.578839    3941 logs.go:276] 0 containers: []
	W0918 13:30:26.578850    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:26.578917    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:26.588987    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:26.589004    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:26.589009    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:26.600681    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:26.600695    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:26.612505    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:26.612519    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:26.627442    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:26.627457    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:26.644884    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:26.644899    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:26.658608    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:26.658621    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:26.693917    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:26.693929    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:26.706033    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:26.706047    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:26.717696    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:26.717710    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:26.730224    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:26.730236    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:26.734726    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:26.734734    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:26.749298    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:26.749311    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:26.774473    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:26.774480    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:26.808124    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:26.808131    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:26.819942    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:26.819951    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:28.711262    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:28.711537    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:28.733277    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:28.733407    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:28.749248    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:28.749355    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:28.762689    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:28.762770    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:28.773823    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:28.773906    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:28.784039    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:28.784122    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:28.794345    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:28.794421    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:28.804464    3992 logs.go:276] 0 containers: []
	W0918 13:30:28.804475    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:28.804540    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:28.814843    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:28.814864    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:28.814870    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:28.826615    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:28.826625    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:28.851547    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:28.851562    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:28.886289    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:28.886302    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:28.900826    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:28.900838    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:28.914522    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:28.914532    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:28.926726    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:28.926736    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:28.938796    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:28.938808    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:28.951141    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:28.951152    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:28.965595    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:28.965612    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:28.983697    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:28.983707    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:29.018502    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:29.018519    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:29.030592    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:29.030604    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:29.042338    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:29.042348    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:29.047154    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:29.047161    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:29.334322    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:31.561012    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:34.336016    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:34.336231    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:34.353641    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:34.353753    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:34.367894    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:34.367987    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:34.381552    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:34.381630    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:34.392334    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:34.392417    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:34.403611    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:34.403694    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:34.420113    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:34.420188    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:34.430460    3941 logs.go:276] 0 containers: []
	W0918 13:30:34.430475    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:34.430547    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:34.441285    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:34.441302    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:34.441310    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:34.446450    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:34.446460    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:34.461841    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:34.461857    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:34.476246    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:34.476257    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:34.488037    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:34.488048    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:34.501082    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:34.501096    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:34.512548    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:34.512561    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:34.535321    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:34.535328    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:34.567441    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:34.567450    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:34.578801    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:34.578810    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:34.590292    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:34.590305    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:34.604839    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:34.604852    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:34.616165    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:34.616179    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:34.650253    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:34.650267    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:34.669033    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:34.669047    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:36.563260    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:36.563584    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:36.591245    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:36.591400    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:36.608266    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:36.608375    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:36.621793    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:36.621892    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:36.633232    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:36.633316    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:36.643599    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:36.643680    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:36.654090    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:36.654172    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:36.664352    3992 logs.go:276] 0 containers: []
	W0918 13:30:36.664366    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:36.664442    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:36.674826    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:36.674848    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:36.674854    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:36.693206    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:36.693217    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:36.704718    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:36.704728    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:36.709028    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:36.709034    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:36.723113    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:36.723123    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:36.734728    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:36.734742    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:36.769852    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:36.769860    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:36.794664    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:36.794673    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:36.810914    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:36.810925    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:36.822325    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:36.822335    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:36.834294    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:36.834304    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:36.848504    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:36.848514    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:36.860540    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:36.860553    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:36.898223    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:36.898234    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:36.913050    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:36.913059    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:39.430249    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:37.181884    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:44.432484    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:44.432622    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:44.447365    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:44.447461    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:44.459775    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:44.459868    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:44.470631    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:44.470709    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:44.481444    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:44.481516    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:44.492029    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:44.492110    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:44.502644    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:44.502721    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:44.512758    3992 logs.go:276] 0 containers: []
	W0918 13:30:44.512770    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:44.512847    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:44.523074    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:44.523091    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:44.523096    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:44.535263    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:44.535272    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:44.569597    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:44.569613    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:44.584244    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:44.584259    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:44.596157    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:44.596168    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:44.630671    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:44.630679    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:44.647317    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:44.647332    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:44.658458    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:44.658470    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:44.673633    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:44.673649    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:44.685048    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:44.685061    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:44.697504    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:44.697519    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:44.701749    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:44.701755    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:44.713774    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:44.713783    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:44.730752    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:44.730763    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:44.754897    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:44.754906    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:42.183975    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:42.184300    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:42.212855    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:42.212990    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:42.250748    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:42.250842    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:42.268863    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:42.268955    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:42.282361    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:42.282452    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:42.292845    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:42.292923    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:42.303660    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:42.303736    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:42.313692    3941 logs.go:276] 0 containers: []
	W0918 13:30:42.313708    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:42.313786    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:42.324400    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:42.324415    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:42.324421    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:42.329591    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:42.329601    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:42.341550    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:42.341560    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:42.356294    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:42.356303    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:42.381329    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:42.381341    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:42.393791    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:42.393807    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:42.427245    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:42.427256    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:42.443191    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:42.443203    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:42.461994    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:42.462005    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:42.476499    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:42.476509    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:42.488743    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:42.488754    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:42.500819    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:42.500831    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:42.512238    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:42.512250    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:42.546839    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:42.546854    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:42.559139    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:42.559150    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:45.072818    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:47.268728    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:50.074893    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:50.075153    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:50.095589    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:50.095705    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:50.109479    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:50.109554    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:50.121542    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:50.121624    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:50.132039    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:50.132113    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:50.142539    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:50.142613    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:50.153967    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:50.154049    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:50.164722    3941 logs.go:276] 0 containers: []
	W0918 13:30:50.164733    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:50.164797    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:50.175287    3941 logs.go:276] 1 containers: [617f27d98fd4]
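Before each sweep, the component containers are rediscovered with one docker ps per component, filtered on the k8s_<component> name prefix that kubelet/cri-dockerd assigns — that is what the Run/containers pairs above show. A sketch of that enumeration under the same assumptions (local bash instead of SSH, docker CLI on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the discovery commands above: list all
    // containers (running or exited) whose name matches k8s_<component>,
    // printing only their IDs.
    func listContainerIDs(component string) ([]string, error) {
        cmd := fmt.Sprintf(
            "docker ps -a --filter=name=k8s_%s --format={{.ID}}", component)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }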
	I0918 13:30:50.175304    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:50.175309    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:50.187285    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:50.187296    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:50.205441    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:50.205452    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:50.218458    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:50.218473    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:50.254379    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:50.254388    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:50.266205    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:50.266216    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:50.278387    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:50.278397    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:50.290219    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:50.290228    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:50.302130    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:50.302140    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:50.336861    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:50.336873    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:50.352519    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:50.352529    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:50.366765    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:50.366775    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:50.378595    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:50.378605    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:50.383399    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:50.383407    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:50.403320    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:50.403335    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:52.269376    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:52.269771    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:52.299981    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:52.300135    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:52.317755    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:52.317856    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:52.331964    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:52.332061    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:52.343790    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:52.343876    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:52.354379    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:52.354463    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:52.364800    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:52.364888    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:52.379185    3992 logs.go:276] 0 containers: []
	W0918 13:30:52.379196    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:52.379266    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:52.390138    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:52.390165    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:52.390171    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:52.394976    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:52.394983    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:52.409261    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:52.409275    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:52.424944    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:52.424957    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:52.437104    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:52.437119    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:52.473553    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:52.473569    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:52.507741    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:52.507750    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:52.518901    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:52.518916    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:52.536348    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:52.536358    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:52.551358    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:52.551372    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:52.563339    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:52.563350    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:52.574816    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:52.574826    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:52.589717    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:52.589726    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:52.600763    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:52.600773    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:52.625503    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:52.625511    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
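The container-status command gathered above is deliberately runtime-agnostic: it tries crictl first and falls back to docker ps -a if crictl is absent or exits nonzero. A small sketch wrapping the same fallback one-liner (assumes bash plus at least one of the two CLIs on PATH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same shape as the gathered command: prefer crictl, fall back
        // to docker if crictl is missing or fails.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }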
	I0918 13:30:55.140051    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:52.928660    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:00.142256    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:00.142587    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:31:00.171209    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:31:00.171350    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:31:00.188547    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:31:00.188654    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:31:00.202281    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:31:00.202374    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:31:00.214676    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:31:00.214762    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:31:00.225292    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:31:00.225379    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:31:00.236956    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:31:00.237028    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:31:00.247756    3992 logs.go:276] 0 containers: []
	W0918 13:31:00.247767    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:31:00.247832    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:31:00.258464    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:31:00.258486    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:31:00.258492    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:31:00.270576    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:31:00.270586    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:31:00.288282    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:31:00.288292    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:31:00.299523    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:31:00.299535    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:31:00.324122    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:31:00.324133    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:31:00.328107    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:31:00.328116    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:31:00.342235    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:31:00.342245    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:31:00.354100    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:31:00.354113    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:31:00.365352    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:31:00.365367    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:00.398805    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:31:00.398814    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:31:00.413105    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:31:00.413116    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:31:00.425069    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:31:00.425079    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:31:00.436923    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:31:00.436939    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:31:00.472206    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:31:00.472221    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:31:00.483944    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:31:00.483954    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:57.930846    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:57.931088    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:57.948838    3941 logs.go:276] 1 containers: [6423e48e15ad]
	I0918 13:30:57.948948    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:57.962096    3941 logs.go:276] 1 containers: [265441622d23]
	I0918 13:30:57.962177    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:57.976330    3941 logs.go:276] 4 containers: [7734b967a8ec 1f37b9eac4e0 90c2345e4c40 0c0e8c82d0b7]
	I0918 13:30:57.976405    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:57.986733    3941 logs.go:276] 1 containers: [27a45e1c1649]
	I0918 13:30:57.986802    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:57.998210    3941 logs.go:276] 1 containers: [b390e6cd5cd5]
	I0918 13:30:57.998300    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:58.009098    3941 logs.go:276] 1 containers: [ebcad777d59a]
	I0918 13:30:58.009190    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:58.019598    3941 logs.go:276] 0 containers: []
	W0918 13:30:58.019611    3941 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:58.019686    3941 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:58.030223    3941 logs.go:276] 1 containers: [617f27d98fd4]
	I0918 13:30:58.030241    3941 logs.go:123] Gathering logs for coredns [90c2345e4c40] ...
	I0918 13:30:58.030247    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90c2345e4c40"
	I0918 13:30:58.042163    3941 logs.go:123] Gathering logs for coredns [7734b967a8ec] ...
	I0918 13:30:58.042178    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7734b967a8ec"
	I0918 13:30:58.060356    3941 logs.go:123] Gathering logs for coredns [0c0e8c82d0b7] ...
	I0918 13:30:58.060368    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c0e8c82d0b7"
	I0918 13:30:58.082984    3941 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:58.082995    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:58.106503    3941 logs.go:123] Gathering logs for container status ...
	I0918 13:30:58.106511    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:58.118108    3941 logs.go:123] Gathering logs for etcd [265441622d23] ...
	I0918 13:30:58.118119    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 265441622d23"
	I0918 13:30:58.131987    3941 logs.go:123] Gathering logs for kube-apiserver [6423e48e15ad] ...
	I0918 13:30:58.132002    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6423e48e15ad"
	I0918 13:30:58.146826    3941 logs.go:123] Gathering logs for coredns [1f37b9eac4e0] ...
	I0918 13:30:58.146837    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f37b9eac4e0"
	I0918 13:30:58.158982    3941 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:58.158994    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:58.193208    3941 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:58.193217    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:58.197967    3941 logs.go:123] Gathering logs for kube-scheduler [27a45e1c1649] ...
	I0918 13:30:58.197977    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27a45e1c1649"
	I0918 13:30:58.212705    3941 logs.go:123] Gathering logs for kube-proxy [b390e6cd5cd5] ...
	I0918 13:30:58.212720    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b390e6cd5cd5"
	I0918 13:30:58.225895    3941 logs.go:123] Gathering logs for kube-controller-manager [ebcad777d59a] ...
	I0918 13:30:58.225906    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebcad777d59a"
	I0918 13:30:58.247013    3941 logs.go:123] Gathering logs for storage-provisioner [617f27d98fd4] ...
	I0918 13:30:58.247027    3941 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 617f27d98fd4"
	I0918 13:30:58.263100    3941 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:58.263111    3941 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:00.797573    3941 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:05.799836    3941 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:05.806029    3941 out.go:201] 
	W0918 13:31:05.809078    3941 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0918 13:31:05.809102    3941 out.go:270] * 
	W0918 13:31:05.810924    3941 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:31:05.824959    3941 out.go:201] 
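The run exits here (GUEST_START) because the apiserver's /healthz endpoint at https://10.0.2.15:8443 never answered inside the 6m0s node-start budget: every "Checking apiserver healthz" line above ended in a client timeout. A minimal sketch of that probe, assuming the apiserver's self-signed serving certificate (so verification is skipped) and a 5s request timeout:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // deadline, as in the timeouts above
            Transport: &http.Transport{
                // The apiserver serves a cluster-local certificate;
                // skip verification for a raw liveness probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            // A timeout here is what this run kept seeing.
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // a healthy apiserver returns 200 "ok"
    }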
	I0918 13:31:03.000842    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:08.003080    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:08.003351    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:31:08.025270    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:31:08.025383    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:31:08.040513    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:31:08.040603    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:31:08.053143    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:31:08.053236    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:31:08.065075    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:31:08.065165    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:31:08.100658    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:31:08.100750    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:31:08.114138    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:31:08.114220    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:31:08.124807    3992 logs.go:276] 0 containers: []
	W0918 13:31:08.124823    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:31:08.124891    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:31:08.135583    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:31:08.135603    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:31:08.135609    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:31:08.148157    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:31:08.148171    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:31:08.159644    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:31:08.159657    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:31:08.177164    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:31:08.177177    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:31:08.188940    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:31:08.188952    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:31:08.203726    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:31:08.203737    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:31:08.217843    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:31:08.217853    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:31:08.229380    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:31:08.229394    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:31:08.247394    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:31:08.247410    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:31:08.280987    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:31:08.280997    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:31:08.293728    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:31:08.293738    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:31:08.307650    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:31:08.307658    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:31:08.333253    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:31:08.333266    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:08.367388    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:31:08.367397    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:31:08.371704    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:31:08.371710    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:31:10.883763    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:15.886068    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:15.886348    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:31:15.908009    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:31:15.908157    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:31:15.927375    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:31:15.927462    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:31:15.939907    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:31:15.939997    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:31:15.950454    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:31:15.950538    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:31:15.964299    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:31:15.964383    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:31:15.975124    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:31:15.975209    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:31:15.985573    3992 logs.go:276] 0 containers: []
	W0918 13:31:15.985588    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:31:15.985659    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:31:15.996138    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:31:15.996159    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:31:15.996165    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:31:16.011061    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:31:16.011071    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:31:16.027434    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:31:16.027446    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:31:16.061148    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:31:16.061159    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:31:16.073588    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:31:16.073598    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:31:16.085748    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:31:16.085758    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:31:16.098513    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:31:16.098525    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:31:16.123598    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:31:16.123608    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:31:16.127841    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:31:16.127851    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:31:16.147219    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:31:16.147231    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:31:16.164980    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:31:16.164990    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:31:16.176750    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:31:16.176761    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:31:16.188282    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:31:16.188291    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:16.223295    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:31:16.223305    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:31:16.235445    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:31:16.235457    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:31:18.755344    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-09-18 20:21:51 UTC, ends at Wed 2024-09-18 20:31:21 UTC. --
	Sep 18 20:31:06 running-upgrade-314000 dockerd[3287]: time="2024-09-18T20:31:06.328566552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 18 20:31:06 running-upgrade-314000 dockerd[3287]: time="2024-09-18T20:31:06.328638382Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0147c95317cc7a76e1c6c7cd63eeecdd8d05f60427e26612f583c61801b8b096 pid=18851 runtime=io.containerd.runc.v2
	Sep 18 20:31:06 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:06Z" level=error msg="ContainerStats resp: {0x4000a16c00 linux}"
	Sep 18 20:31:06 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:06Z" level=error msg="ContainerStats resp: {0x40001deac0 linux}"
	Sep 18 20:31:07 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 18 20:31:07 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:07Z" level=error msg="ContainerStats resp: {0x4000393a40 linux}"
	Sep 18 20:31:08 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:08Z" level=error msg="ContainerStats resp: {0x4000a84500 linux}"
	Sep 18 20:31:08 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:08Z" level=error msg="ContainerStats resp: {0x4000a84940 linux}"
	Sep 18 20:31:08 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:08Z" level=error msg="ContainerStats resp: {0x4000a4c7c0 linux}"
	Sep 18 20:31:08 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:08Z" level=error msg="ContainerStats resp: {0x4000a85480 linux}"
	Sep 18 20:31:08 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:08Z" level=error msg="ContainerStats resp: {0x4000a85900 linux}"
	Sep 18 20:31:08 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:08Z" level=error msg="ContainerStats resp: {0x4000a85d80 linux}"
	Sep 18 20:31:08 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:08Z" level=error msg="ContainerStats resp: {0x4000a4d5c0 linux}"
	Sep 18 20:31:12 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 18 20:31:17 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:17Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 18 20:31:18 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:18Z" level=error msg="ContainerStats resp: {0x400099ea40 linux}"
	Sep 18 20:31:18 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:18Z" level=error msg="ContainerStats resp: {0x4000916600 linux}"
	Sep 18 20:31:19 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:19Z" level=error msg="ContainerStats resp: {0x40003928c0 linux}"
	Sep 18 20:31:20 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:20Z" level=error msg="ContainerStats resp: {0x40001df740 linux}"
	Sep 18 20:31:20 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:20Z" level=error msg="ContainerStats resp: {0x4000393680 linux}"
	Sep 18 20:31:20 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:20Z" level=error msg="ContainerStats resp: {0x40001dfc40 linux}"
	Sep 18 20:31:20 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:20Z" level=error msg="ContainerStats resp: {0x4000358d40 linux}"
	Sep 18 20:31:20 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:20Z" level=error msg="ContainerStats resp: {0x4000359300 linux}"
	Sep 18 20:31:20 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:20Z" level=error msg="ContainerStats resp: {0x40000b9e80 linux}"
	Sep 18 20:31:20 running-upgrade-314000 cri-dockerd[3128]: time="2024-09-18T20:31:20Z" level=error msg="ContainerStats resp: {0x4000798e00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9f50f1b168c7b       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   c98325dd7d1f5
	0147c95317cc7       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   79bf3701bee04
	7734b967a8ec6       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c98325dd7d1f5
	1f37b9eac4e0e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   79bf3701bee04
	617f27d98fd45       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   37d92f379ae32
	b390e6cd5cd5d       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c1b928db2b159
	265441622d235       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   2186da0d7c574
	27a45e1c16491       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   9ac4336404d47
	ebcad777d59a8       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   c3152f3d613a7
	6423e48e15ad9       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   1050beff82735
	
	
	==> coredns [0147c95317cc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3459257279632249991.1332947276740435945. HINFO: read udp 10.244.0.2:44377->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3459257279632249991.1332947276740435945. HINFO: read udp 10.244.0.2:53805->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3459257279632249991.1332947276740435945. HINFO: read udp 10.244.0.2:57232->10.0.2.3:53: i/o timeout
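Every CoreDNS replica in this dump fails the same way: its startup HINFO self-check cannot reach the upstream resolver at 10.0.2.3:53 (the built-in DNS address QEMU user-mode networking exposes to the guest), so each UDP read times out. A small probe of that path using Go's pure resolver pointed at the same upstream; the lookup name is illustrative:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Force the pure-Go resolver and send every query to the
        // upstream CoreDNS cannot reach in the logs above.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "udp", "10.0.2.3:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.io")
        if err != nil {
            // An i/o timeout here reproduces the plugin/errors lines above.
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(addrs)
    }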
	
	
	==> coredns [1f37b9eac4e0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:47801->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:57059->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:56981->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:34387->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:55666->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:39128->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:35876->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:51089->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:51681->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9219326902249045966.6447764745655765755. HINFO: read udp 10.244.0.2:44197->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7734b967a8ec] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:33337->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:59790->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:54997->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:55045->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:53652->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:47538->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:60348->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:52417->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:45860->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7361176836284884323.8160129454604743232. HINFO: read udp 10.244.0.3:46588->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f50f1b168c7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5144129222377924833.1120242971614528694. HINFO: read udp 10.244.0.3:53016->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144129222377924833.1120242971614528694. HINFO: read udp 10.244.0.3:48424->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144129222377924833.1120242971614528694. HINFO: read udp 10.244.0.3:47047->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-314000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-314000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=running-upgrade-314000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T13_27_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:27:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-314000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:31:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:27:05 +0000   Wed, 18 Sep 2024 20:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:27:05 +0000   Wed, 18 Sep 2024 20:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:27:05 +0000   Wed, 18 Sep 2024 20:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:27:05 +0000   Wed, 18 Sep 2024 20:27:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-314000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ac1522eb3da495693744f9fdeeb5a21
	  System UUID:                8ac1522eb3da495693744f9fdeeb5a21
	  Boot ID:                    6f6bb946-783b-486a-8025-4f4256d47e49
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bqt8h                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-g4hgj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-314000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-314000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-314000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-q5rg4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-314000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-314000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-314000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-314000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-314000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-314000 event: Registered Node running-upgrade-314000 in Controller
	
	
	==> dmesg <==
	[  +1.802138] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.064751] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.076094] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.136645] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.089571] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.087376] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.467669] systemd-fstab-generator[1283]: Ignoring "noauto" for root device
	[ +22.612227] systemd-fstab-generator[2005]: Ignoring "noauto" for root device
	[  +2.469402] systemd-fstab-generator[2277]: Ignoring "noauto" for root device
	[  +0.193640] systemd-fstab-generator[2316]: Ignoring "noauto" for root device
	[  +0.100132] systemd-fstab-generator[2327]: Ignoring "noauto" for root device
	[  +0.109875] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
	[ +12.651569] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.211315] systemd-fstab-generator[3084]: Ignoring "noauto" for root device
	[  +0.085103] systemd-fstab-generator[3096]: Ignoring "noauto" for root device
	[  +0.082101] systemd-fstab-generator[3107]: Ignoring "noauto" for root device
	[  +0.072025] systemd-fstab-generator[3121]: Ignoring "noauto" for root device
	[  +2.412036] systemd-fstab-generator[3273]: Ignoring "noauto" for root device
	[  +4.076192] systemd-fstab-generator[3642]: Ignoring "noauto" for root device
	[  +1.131466] systemd-fstab-generator[3938]: Ignoring "noauto" for root device
	[Sep18 20:23] kauditd_printk_skb: 68 callbacks suppressed
	[Sep18 20:26] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.329766] systemd-fstab-generator[11985]: Ignoring "noauto" for root device
	[Sep18 20:27] systemd-fstab-generator[12591]: Ignoring "noauto" for root device
	[  +0.475350] systemd-fstab-generator[12725]: Ignoring "noauto" for root device
	
	
	==> etcd [265441622d23] <==
	{"level":"info","ts":"2024-09-18T20:27:00.105Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-18T20:27:00.105Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-18T20:27:00.101Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-18T20:27:00.106Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-18T20:27:00.105Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-18T20:27:00.105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-18T20:27:00.108Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-18T20:27:01.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-18T20:27:01.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-18T20:27:01.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-18T20:27:01.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-18T20:27:01.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-18T20:27:01.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-18T20:27:01.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-18T20:27:01.092Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-314000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T20:27:01.092Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:27:01.092Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:27:01.093Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:27:01.093Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:27:01.093Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T20:27:01.093Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T20:27:01.093Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T20:27:01.093Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T20:27:01.093Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T20:27:01.094Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 20:31:22 up 9 min,  0 users,  load average: 0.14, 0.30, 0.18
	Linux running-upgrade-314000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6423e48e15ad] <==
	I0918 20:27:02.295252       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0918 20:27:02.295328       1 cache.go:39] Caches are synced for autoregister controller
	I0918 20:27:02.295259       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 20:27:02.295266       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0918 20:27:02.305856       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0918 20:27:02.307502       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0918 20:27:02.313850       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0918 20:27:03.030620       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0918 20:27:03.197081       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0918 20:27:03.198860       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0918 20:27:03.198873       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 20:27:03.319023       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 20:27:03.331490       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 20:27:03.380278       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0918 20:27:03.382143       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0918 20:27:03.382505       1 controller.go:611] quota admission added evaluator for: endpoints
	I0918 20:27:03.383717       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 20:27:04.328625       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0918 20:27:05.029314       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0918 20:27:05.032993       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0918 20:27:05.037569       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0918 20:27:05.106318       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 20:27:17.686258       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0918 20:27:17.934464       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0918 20:27:18.291416       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ebcad777d59a] <==
	I0918 20:27:17.161537       1 shared_informer.go:262] Caches are synced for deployment
	I0918 20:27:17.162817       1 shared_informer.go:262] Caches are synced for HPA
	I0918 20:27:17.178458       1 shared_informer.go:262] Caches are synced for stateful set
	I0918 20:27:17.178489       1 shared_informer.go:262] Caches are synced for persistent volume
	I0918 20:27:17.178544       1 shared_informer.go:262] Caches are synced for disruption
	I0918 20:27:17.178548       1 disruption.go:371] Sending events to api server.
	I0918 20:27:17.182978       1 shared_informer.go:262] Caches are synced for endpoint
	I0918 20:27:17.183166       1 shared_informer.go:262] Caches are synced for GC
	I0918 20:27:17.183770       1 shared_informer.go:262] Caches are synced for ephemeral
	I0918 20:27:17.237619       1 shared_informer.go:262] Caches are synced for resource quota
	I0918 20:27:17.247512       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0918 20:27:17.286576       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0918 20:27:17.286598       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0918 20:27:17.287655       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0918 20:27:17.287661       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0918 20:27:17.288774       1 shared_informer.go:262] Caches are synced for resource quota
	I0918 20:27:17.330036       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0918 20:27:17.433614       1 shared_informer.go:262] Caches are synced for attach detach
	I0918 20:27:17.690962       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q5rg4"
	I0918 20:27:17.805031       1 shared_informer.go:262] Caches are synced for garbage collector
	I0918 20:27:17.881199       1 shared_informer.go:262] Caches are synced for garbage collector
	I0918 20:27:17.881291       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0918 20:27:17.935780       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0918 20:27:18.187462       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-g4hgj"
	I0918 20:27:18.191024       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-bqt8h"
	
	
	==> kube-proxy [b390e6cd5cd5] <==
	I0918 20:27:18.272033       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0918 20:27:18.272159       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0918 20:27:18.272184       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0918 20:27:18.289521       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0918 20:27:18.289531       1 server_others.go:206] "Using iptables Proxier"
	I0918 20:27:18.289546       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0918 20:27:18.289689       1 server.go:661] "Version info" version="v1.24.1"
	I0918 20:27:18.289694       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 20:27:18.290312       1 config.go:317] "Starting service config controller"
	I0918 20:27:18.290317       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0918 20:27:18.290325       1 config.go:226] "Starting endpoint slice config controller"
	I0918 20:27:18.290326       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0918 20:27:18.290509       1 config.go:444] "Starting node config controller"
	I0918 20:27:18.290511       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0918 20:27:18.391322       1 shared_informer.go:262] Caches are synced for node config
	I0918 20:27:18.391341       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0918 20:27:18.391322       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [27a45e1c1649] <==
	W0918 20:27:02.276350       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 20:27:02.276368       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 20:27:02.276399       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 20:27:02.276427       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0918 20:27:02.276455       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 20:27:02.276471       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0918 20:27:02.276537       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 20:27:02.276556       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 20:27:02.276604       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 20:27:02.276622       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0918 20:27:02.276646       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 20:27:02.276674       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0918 20:27:02.276706       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 20:27:02.276724       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 20:27:02.276763       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 20:27:02.276781       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0918 20:27:03.109794       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 20:27:03.109807       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0918 20:27:03.180335       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 20:27:03.180354       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0918 20:27:03.194212       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 20:27:03.194266       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 20:27:03.308773       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:27:03.308793       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0918 20:27:06.072987       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-09-18 20:21:51 UTC, ends at Wed 2024-09-18 20:31:22 UTC. --
	Sep 18 20:27:06 running-upgrade-314000 kubelet[12597]: E0918 20:27:06.858584   12597 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-314000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-314000"
	Sep 18 20:27:07 running-upgrade-314000 kubelet[12597]: E0918 20:27:07.058262   12597 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-314000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-314000"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.159613   12597 topology_manager.go:200] "Topology Admit Handler"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.198023   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjd9\" (UniqueName: \"kubernetes.io/projected/de3ea1f2-c19e-4f16-9093-41cbf4c8aede-kube-api-access-9xjd9\") pod \"storage-provisioner\" (UID: \"de3ea1f2-c19e-4f16-9093-41cbf4c8aede\") " pod="kube-system/storage-provisioner"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.198065   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/de3ea1f2-c19e-4f16-9093-41cbf4c8aede-tmp\") pod \"storage-provisioner\" (UID: \"de3ea1f2-c19e-4f16-9093-41cbf4c8aede\") " pod="kube-system/storage-provisioner"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.198104   12597 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.198368   12597 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: E0918 20:27:17.302034   12597 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: E0918 20:27:17.302076   12597 projected.go:192] Error preparing data for projected volume kube-api-access-9xjd9 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: E0918 20:27:17.302163   12597 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/de3ea1f2-c19e-4f16-9093-41cbf4c8aede-kube-api-access-9xjd9 podName:de3ea1f2-c19e-4f16-9093-41cbf4c8aede nodeName:}" failed. No retries permitted until 2024-09-18 20:27:17.802134092 +0000 UTC m=+12.784776957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9xjd9" (UniqueName: "kubernetes.io/projected/de3ea1f2-c19e-4f16-9093-41cbf4c8aede-kube-api-access-9xjd9") pod "storage-provisioner" (UID: "de3ea1f2-c19e-4f16-9093-41cbf4c8aede") : configmap "kube-root-ca.crt" not found
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.696439   12597 topology_manager.go:200] "Topology Admit Handler"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.805358   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8cb6ca3-41bf-48ac-88aa-10b800af58ea-xtables-lock\") pod \"kube-proxy-q5rg4\" (UID: \"e8cb6ca3-41bf-48ac-88aa-10b800af58ea\") " pod="kube-system/kube-proxy-q5rg4"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.805384   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8cb6ca3-41bf-48ac-88aa-10b800af58ea-lib-modules\") pod \"kube-proxy-q5rg4\" (UID: \"e8cb6ca3-41bf-48ac-88aa-10b800af58ea\") " pod="kube-system/kube-proxy-q5rg4"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.805397   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfxc2\" (UniqueName: \"kubernetes.io/projected/e8cb6ca3-41bf-48ac-88aa-10b800af58ea-kube-api-access-kfxc2\") pod \"kube-proxy-q5rg4\" (UID: \"e8cb6ca3-41bf-48ac-88aa-10b800af58ea\") " pod="kube-system/kube-proxy-q5rg4"
	Sep 18 20:27:17 running-upgrade-314000 kubelet[12597]: I0918 20:27:17.805417   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8cb6ca3-41bf-48ac-88aa-10b800af58ea-kube-proxy\") pod \"kube-proxy-q5rg4\" (UID: \"e8cb6ca3-41bf-48ac-88aa-10b800af58ea\") " pod="kube-system/kube-proxy-q5rg4"
	Sep 18 20:27:18 running-upgrade-314000 kubelet[12597]: I0918 20:27:18.190457   12597 topology_manager.go:200] "Topology Admit Handler"
	Sep 18 20:27:18 running-upgrade-314000 kubelet[12597]: I0918 20:27:18.196774   12597 topology_manager.go:200] "Topology Admit Handler"
	Sep 18 20:27:18 running-upgrade-314000 kubelet[12597]: I0918 20:27:18.243515   12597 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="37d92f379ae324c44d5ba77d0bb016a2482bc573b624c0f30122046174829465"
	Sep 18 20:27:18 running-upgrade-314000 kubelet[12597]: I0918 20:27:18.308317   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm724\" (UniqueName: \"kubernetes.io/projected/26209ec7-ed40-4a43-be13-ce9159b80b47-kube-api-access-fm724\") pod \"coredns-6d4b75cb6d-bqt8h\" (UID: \"26209ec7-ed40-4a43-be13-ce9159b80b47\") " pod="kube-system/coredns-6d4b75cb6d-bqt8h"
	Sep 18 20:27:18 running-upgrade-314000 kubelet[12597]: I0918 20:27:18.308351   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9a994ef-4b94-45fb-aa1d-7dee2adc5ca7-config-volume\") pod \"coredns-6d4b75cb6d-g4hgj\" (UID: \"a9a994ef-4b94-45fb-aa1d-7dee2adc5ca7\") " pod="kube-system/coredns-6d4b75cb6d-g4hgj"
	Sep 18 20:27:18 running-upgrade-314000 kubelet[12597]: I0918 20:27:18.308363   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd589\" (UniqueName: \"kubernetes.io/projected/a9a994ef-4b94-45fb-aa1d-7dee2adc5ca7-kube-api-access-wd589\") pod \"coredns-6d4b75cb6d-g4hgj\" (UID: \"a9a994ef-4b94-45fb-aa1d-7dee2adc5ca7\") " pod="kube-system/coredns-6d4b75cb6d-g4hgj"
	Sep 18 20:27:18 running-upgrade-314000 kubelet[12597]: I0918 20:27:18.308386   12597 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26209ec7-ed40-4a43-be13-ce9159b80b47-config-volume\") pod \"coredns-6d4b75cb6d-bqt8h\" (UID: \"26209ec7-ed40-4a43-be13-ce9159b80b47\") " pod="kube-system/coredns-6d4b75cb6d-bqt8h"
	Sep 18 20:27:19 running-upgrade-314000 kubelet[12597]: I0918 20:27:19.260113   12597 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c98325dd7d1f56288ce4ecf9e41e2267a2ddc1c8b2fdb8ed1d4c83d3abd895f7"
	Sep 18 20:31:06 running-upgrade-314000 kubelet[12597]: I0918 20:31:06.430211   12597 scope.go:110] "RemoveContainer" containerID="90c2345e4c409aec45544bad98219b9ad4f1c3272561ad042d95d58cad83cf76"
	Sep 18 20:31:06 running-upgrade-314000 kubelet[12597]: I0918 20:31:06.449306   12597 scope.go:110] "RemoveContainer" containerID="0c0e8c82d0b72b547089701a7ac9428870b03a270a5c29fb4d1bed5eea95bb9b"
	
	
	==> storage-provisioner [617f27d98fd4] <==
	I0918 20:27:18.301870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 20:27:18.306173       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 20:27:18.306190       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 20:27:18.309757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 20:27:18.309901       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-314000_b0ae7988-2009-4259-95f5-e3c90cc9c663!
	I0918 20:27:18.310134       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"951911d7-9490-495f-9a10-d801195a98ec", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-314000_b0ae7988-2009-4259-95f5-e3c90cc9c663 became leader
	I0918 20:27:18.410869       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-314000_b0ae7988-2009-4259-95f5-e3c90cc9c663!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-314000 -n running-upgrade-314000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-314000 -n running-upgrade-314000: exit status 2 (15.652443958s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-314000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-314000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-314000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-314000: (1.3674645s)
--- FAIL: TestRunningBinaryUpgrade (610.92s)
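Note on the probe above: helpers_test.go drives minikube's status subcommand with a Go template over its status struct, which is why single fields such as {{.APIServer}} can be queried in isolation. A minimal sketch of the same check run by hand (profile name taken from this run; the Kubelet field is assumed to exist alongside the Host and APIServer fields that appear in this log):

	# Print host, kubelet, and apiserver state for the profile in one call.
	out/minikube-darwin-arm64 status -p running-upgrade-314000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'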

TestKubernetesUpgrade (21.98s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-593000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-593000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (12.549298791s)

-- stdout --
	* [kubernetes-upgrade-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-593000" primary control-plane node in "kubernetes-upgrade-593000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-593000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:21:01.772006    3777 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:21:01.772162    3777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:21:01.772165    3777 out.go:358] Setting ErrFile to fd 2...
	I0918 13:21:01.772167    3777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:21:01.772321    3777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:21:01.773702    3777 out.go:352] Setting JSON to false
	I0918 13:21:01.793017    3777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3020,"bootTime":1726687841,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:21:01.793138    3777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:21:01.802494    3777 out.go:177] * [kubernetes-upgrade-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:21:01.812777    3777 notify.go:220] Checking for updates...
	I0918 13:21:01.819422    3777 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:21:01.825102    3777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:21:01.831467    3777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:21:01.839006    3777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:21:01.846457    3777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:21:01.858432    3777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:21:01.863018    3777 config.go:182] Loaded profile config "NoKubernetes-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:21:01.863098    3777 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:21:01.863161    3777 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:21:01.866413    3777 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:21:01.874403    3777 start.go:297] selected driver: qemu2
	I0918 13:21:01.874410    3777 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:21:01.874417    3777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:21:01.877363    3777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:21:01.881500    3777 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:21:01.886512    3777 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 13:21:01.886530    3777 cni.go:84] Creating CNI manager for ""
	I0918 13:21:01.886562    3777 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 13:21:01.886598    3777 start.go:340] cluster config:
	{Name:kubernetes-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:21:01.891696    3777 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:21:01.897301    3777 out.go:177] * Starting "kubernetes-upgrade-593000" primary control-plane node in "kubernetes-upgrade-593000" cluster
	I0918 13:21:01.902456    3777 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 13:21:01.902492    3777 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 13:21:01.902509    3777 cache.go:56] Caching tarball of preloaded images
	I0918 13:21:01.902616    3777 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:21:01.902623    3777 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 13:21:01.902702    3777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/kubernetes-upgrade-593000/config.json ...
	I0918 13:21:01.902714    3777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/kubernetes-upgrade-593000/config.json: {Name:mk3bca312df4200dbfc522abd20387027c022cbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:21:01.903006    3777 start.go:360] acquireMachinesLock for kubernetes-upgrade-593000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:21:04.102809    3777 start.go:364] duration metric: took 2.199759833s to acquireMachinesLock for "kubernetes-upgrade-593000"
	I0918 13:21:04.103030    3777 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:21:04.103288    3777 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:21:04.114530    3777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:21:04.167834    3777 start.go:159] libmachine.API.Create for "kubernetes-upgrade-593000" (driver="qemu2")
	I0918 13:21:04.167900    3777 client.go:168] LocalClient.Create starting
	I0918 13:21:04.167999    3777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:21:04.168059    3777 main.go:141] libmachine: Decoding PEM data...
	I0918 13:21:04.168076    3777 main.go:141] libmachine: Parsing certificate...
	I0918 13:21:04.168137    3777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:21:04.168181    3777 main.go:141] libmachine: Decoding PEM data...
	I0918 13:21:04.168206    3777 main.go:141] libmachine: Parsing certificate...
	I0918 13:21:04.168864    3777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:21:04.427369    3777 main.go:141] libmachine: Creating SSH key...
	I0918 13:21:04.608355    3777 main.go:141] libmachine: Creating Disk image...
	I0918 13:21:04.608366    3777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:21:04.608565    3777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:04.618507    3777 main.go:141] libmachine: STDOUT: 
	I0918 13:21:04.618528    3777 main.go:141] libmachine: STDERR: 
	I0918 13:21:04.618582    3777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2 +20000M
	I0918 13:21:04.626697    3777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:21:04.626717    3777 main.go:141] libmachine: STDERR: 
	I0918 13:21:04.626730    3777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:04.626734    3777 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:21:04.626746    3777 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:21:04.626781    3777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:54:af:7b:a6:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:04.628472    3777 main.go:141] libmachine: STDOUT: 
	I0918 13:21:04.628486    3777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:21:04.628508    3777 client.go:171] duration metric: took 460.612375ms to LocalClient.Create
	I0918 13:21:06.630641    3777 start.go:128] duration metric: took 2.527386s to createHost
	I0918 13:21:06.630761    3777 start.go:83] releasing machines lock for "kubernetes-upgrade-593000", held for 2.527867959s
	W0918 13:21:06.630832    3777 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:21:06.650429    3777 out.go:177] * Deleting "kubernetes-upgrade-593000" in qemu2 ...
	W0918 13:21:06.683573    3777 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:21:06.683595    3777 start.go:729] Will try again in 5 seconds ...
	I0918 13:21:11.685715    3777 start.go:360] acquireMachinesLock for kubernetes-upgrade-593000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:21:11.691058    3777 start.go:364] duration metric: took 5.187375ms to acquireMachinesLock for "kubernetes-upgrade-593000"
	I0918 13:21:11.691124    3777 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:21:11.691383    3777 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:21:11.702992    3777 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:21:11.755838    3777 start.go:159] libmachine.API.Create for "kubernetes-upgrade-593000" (driver="qemu2")
	I0918 13:21:11.755877    3777 client.go:168] LocalClient.Create starting
	I0918 13:21:11.755965    3777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:21:11.756030    3777 main.go:141] libmachine: Decoding PEM data...
	I0918 13:21:11.756047    3777 main.go:141] libmachine: Parsing certificate...
	I0918 13:21:11.756113    3777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:21:11.756159    3777 main.go:141] libmachine: Decoding PEM data...
	I0918 13:21:11.756173    3777 main.go:141] libmachine: Parsing certificate...
	I0918 13:21:11.756661    3777 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:21:12.018465    3777 main.go:141] libmachine: Creating SSH key...
	I0918 13:21:12.224432    3777 main.go:141] libmachine: Creating Disk image...
	I0918 13:21:12.224446    3777 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:21:12.224673    3777 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:12.234491    3777 main.go:141] libmachine: STDOUT: 
	I0918 13:21:12.234520    3777 main.go:141] libmachine: STDERR: 
	I0918 13:21:12.234595    3777 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2 +20000M
	I0918 13:21:12.242546    3777 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:21:12.242562    3777 main.go:141] libmachine: STDERR: 
	I0918 13:21:12.242577    3777 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:12.242580    3777 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:21:12.242589    3777 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:21:12.242626    3777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:df:71:79:fe:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:12.244253    3777 main.go:141] libmachine: STDOUT: 
	I0918 13:21:12.244270    3777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:21:12.244284    3777 client.go:171] duration metric: took 488.415042ms to LocalClient.Create
	I0918 13:21:14.246412    3777 start.go:128] duration metric: took 2.555056875s to createHost
	I0918 13:21:14.246476    3777 start.go:83] releasing machines lock for "kubernetes-upgrade-593000", held for 2.555460167s
	W0918 13:21:14.246887    3777 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:21:14.259536    3777 out.go:201] 
	W0918 13:21:14.263565    3777 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:21:14.263597    3777 out.go:270] * 
	* 
	W0918 13:21:14.266356    3777 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:21:14.276234    3777 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-593000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
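Both create attempts in the stderr above fail at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach a socket_vmnet daemon listening on /var/run/socket_vmnet, and that connection is refused, so no VM ever boots. A hedged pre-flight check for a runner in this state, assuming socket_vmnet was installed via Homebrew as minikube's qemu driver documentation describes (the socket path comes from this log; the service setup is that assumption):

	# Does the expected unix socket exist?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded? (the launchd label assumes a Homebrew install)
	sudo launchctl list | grep socket_vmnet
	# If not, (re)start it before rerunning the test:
	sudo brew services start socket_vmnet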
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-593000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-593000: (4.010186083s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-593000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-593000 status --format={{.Host}}: exit status 7 (63.426792ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
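minikube's status command composes its exit code as a bitmask (1 = host not running, 2 = cluster/apiserver not running, 4 = kubernetes components not running), so the exit status 7 above means all three report stopped, which is expected right after the stop command and is why the harness notes it "may be ok". The earlier exit status 2 from running-upgrade-314000 decodes, under the same scheme, as a running host whose apiserver is down. A quick sketch:

	# 7 = 1+2+4: host, cluster, and kubernetes all report stopped.
	out/minikube-darwin-arm64 -p kubernetes-upgrade-593000 status --format={{.Host}}; echo "exit=$?"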
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-593000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-593000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.179318667s)

-- stdout --
	* [kubernetes-upgrade-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-593000" primary control-plane node in "kubernetes-upgrade-593000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-593000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-593000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:21:18.395962    3837 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:21:18.396089    3837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:21:18.396092    3837 out.go:358] Setting ErrFile to fd 2...
	I0918 13:21:18.396095    3837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:21:18.396248    3837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:21:18.397273    3837 out.go:352] Setting JSON to false
	I0918 13:21:18.413070    3837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3037,"bootTime":1726687841,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:21:18.413150    3837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:21:18.417222    3837 out.go:177] * [kubernetes-upgrade-593000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:21:18.422931    3837 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:21:18.422984    3837 notify.go:220] Checking for updates...
	I0918 13:21:18.429890    3837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:21:18.432923    3837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:21:18.435914    3837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:21:18.438848    3837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:21:18.441906    3837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:21:18.445257    3837 config.go:182] Loaded profile config "kubernetes-upgrade-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0918 13:21:18.445580    3837 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:21:18.449846    3837 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:21:18.456916    3837 start.go:297] selected driver: qemu2
	I0918 13:21:18.456923    3837 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:21:18.456980    3837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:21:18.459288    3837 cni.go:84] Creating CNI manager for ""
	I0918 13:21:18.459326    3837 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:21:18.459356    3837 start.go:340] cluster config:
	{Name:kubernetes-upgrade-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:21:18.462898    3837 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:21:18.469868    3837 out.go:177] * Starting "kubernetes-upgrade-593000" primary control-plane node in "kubernetes-upgrade-593000" cluster
	I0918 13:21:18.473736    3837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:21:18.473756    3837 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:21:18.473767    3837 cache.go:56] Caching tarball of preloaded images
	I0918 13:21:18.473839    3837 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:21:18.473845    3837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:21:18.473920    3837 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/kubernetes-upgrade-593000/config.json ...
	I0918 13:21:18.474386    3837 start.go:360] acquireMachinesLock for kubernetes-upgrade-593000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:21:18.474415    3837 start.go:364] duration metric: took 22.792µs to acquireMachinesLock for "kubernetes-upgrade-593000"
	I0918 13:21:18.474423    3837 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:21:18.474430    3837 fix.go:54] fixHost starting: 
	I0918 13:21:18.474548    3837 fix.go:112] recreateIfNeeded on kubernetes-upgrade-593000: state=Stopped err=<nil>
	W0918 13:21:18.474557    3837 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:21:18.478869    3837 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-593000" ...
	I0918 13:21:18.482742    3837 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:21:18.482792    3837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:df:71:79:fe:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:18.484964    3837 main.go:141] libmachine: STDOUT: 
	I0918 13:21:18.484986    3837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:21:18.485021    3837 fix.go:56] duration metric: took 10.591084ms for fixHost
	I0918 13:21:18.485027    3837 start.go:83] releasing machines lock for "kubernetes-upgrade-593000", held for 10.608375ms
	W0918 13:21:18.485033    3837 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:21:18.485071    3837 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:21:18.485076    3837 start.go:729] Will try again in 5 seconds ...
	I0918 13:21:23.487131    3837 start.go:360] acquireMachinesLock for kubernetes-upgrade-593000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:21:23.487657    3837 start.go:364] duration metric: took 376.208µs to acquireMachinesLock for "kubernetes-upgrade-593000"
	I0918 13:21:23.487754    3837 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:21:23.487775    3837 fix.go:54] fixHost starting: 
	I0918 13:21:23.488593    3837 fix.go:112] recreateIfNeeded on kubernetes-upgrade-593000: state=Stopped err=<nil>
	W0918 13:21:23.488618    3837 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:21:23.493040    3837 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-593000" ...
	I0918 13:21:23.501071    3837 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:21:23.501302    3837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:df:71:79:fe:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubernetes-upgrade-593000/disk.qcow2
	I0918 13:21:23.510836    3837 main.go:141] libmachine: STDOUT: 
	I0918 13:21:23.510911    3837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:21:23.510993    3837 fix.go:56] duration metric: took 23.214583ms for fixHost
	I0918 13:21:23.511041    3837 start.go:83] releasing machines lock for "kubernetes-upgrade-593000", held for 23.338541ms
	W0918 13:21:23.511280    3837 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-593000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-593000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:21:23.519016    3837 out.go:201] 
	W0918 13:21:23.522041    3837 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:21:23.522067    3837 out.go:270] * 
	* 
	W0918 13:21:23.524621    3837 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:21:23.531965    3837 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-593000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-593000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-593000 version --output=json: exit status 1 (65.827666ms)

** stderr ** 
	error: context "kubernetes-upgrade-593000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-18 13:21:23.61286 -0700 PDT m=+2654.699662418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-593000 -n kubernetes-upgrade-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-593000 -n kubernetes-upgrade-593000: exit status 7 (33.0415ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-593000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-593000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-593000
--- FAIL: TestKubernetesUpgrade (21.98s)
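Every start attempt in this test aborts at the same point: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (connection refused), so the VM never boots and the kubectl context is never created. A minimal host-side check, sketched with the install paths that appear in the logs above (the --vmnet-gateway address below is an assumption; see the socket_vmnet README for the exact launch options on a given install):

	# Confirm a daemon is listening on the unix socket that minikube dials
	ls -l /var/run/socket_vmnet
	# If the socket is missing or stale, start the daemon by hand (gateway address is an assumed example)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon reachable, the socket_vmnet_client invocation logged above should hand qemu-system-aarch64 a connected fd instead of failing the connect.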

TestNoKubernetes/serial/StartWithK8s (12.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-748000 --driver=qemu2 
E0918 13:20:56.316027    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-748000 --driver=qemu2 : exit status 80 (12.577743375s)

-- stdout --
	* [NoKubernetes-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-748000" primary control-plane node in "NoKubernetes-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-748000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000: exit status 7 (51.694ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (12.63s)

TestNoKubernetes/serial/StartWithStopK8s (7.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --driver=qemu2 : exit status 80 (7.53488525s)

-- stdout --
	* [NoKubernetes-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-748000
	* Restarting existing qemu2 VM for "NoKubernetes-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000: exit status 7 (51.677917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.59s)

TestNoKubernetes/serial/Start (7.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --driver=qemu2 : exit status 80 (7.550872s)

-- stdout --
	* [NoKubernetes-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-748000
	* Restarting existing qemu2 VM for "NoKubernetes-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000: exit status 7 (70.15825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.62s)

TestNoKubernetes/serial/StartNoArgs (5.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-748000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-748000 --driver=qemu2 : exit status 80 (5.2788785s)

-- stdout --
	* [NoKubernetes-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-748000
	* Restarting existing qemu2 VM for "NoKubernetes-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-748000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-748000 -n NoKubernetes-748000: exit status 7 (70.929167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)
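All four NoKubernetes subtests reuse the NoKubernetes-748000 profile and fail on the same socket_vmnet connect, so the later subtests inherit a stopped VM from the first. The recovery that minikube itself suggests in the stderr above, sketched with the binary path and profile name from this run (it will only succeed once the socket_vmnet daemon is reachable again):

	out/minikube-darwin-arm64 delete -p NoKubernetes-748000
	out/minikube-darwin-arm64 start -p NoKubernetes-748000 --driver=qemu2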

TestStoppedBinaryUpgrade/Upgrade (606.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3191342934 start -p stopped-upgrade-367000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3191342934 start -p stopped-upgrade-367000 --memory=2200 --vm-driver=qemu2 : (1m14.505490084s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3191342934 -p stopped-upgrade-367000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3191342934 -p stopped-upgrade-367000 stop: (12.120405s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-367000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0918 13:24:27.281685    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:25:56.307781    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:28:59.400186    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:29:27.274162    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:30:56.299873    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-367000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.284795542s)

-- stdout --
	* [stopped-upgrade-367000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-367000" primary control-plane node in "stopped-upgrade-367000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-367000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0918 13:22:51.424511    3992 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:22:51.424662    3992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:22:51.424666    3992 out.go:358] Setting ErrFile to fd 2...
	I0918 13:22:51.424669    3992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:22:51.424840    3992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:22:51.426050    3992 out.go:352] Setting JSON to false
	I0918 13:22:51.445059    3992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3130,"bootTime":1726687841,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:22:51.445127    3992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:22:51.449630    3992 out.go:177] * [stopped-upgrade-367000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:22:51.465118    3992 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:22:51.465173    3992 notify.go:220] Checking for updates...
	I0918 13:22:51.472655    3992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:22:51.475599    3992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:22:51.478595    3992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:22:51.481631    3992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:22:51.483082    3992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:22:51.486914    3992 config.go:182] Loaded profile config "stopped-upgrade-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:22:51.490586    3992 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 13:22:51.493577    3992 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:22:51.497582    3992 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:22:51.504573    3992 start.go:297] selected driver: qemu2
	I0918 13:22:51.504581    3992 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:51.504651    3992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:22:51.507421    3992 cni.go:84] Creating CNI manager for ""
	I0918 13:22:51.507458    3992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:22:51.507488    3992 start.go:340] cluster config:
	{Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:22:51.507552    3992 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:22:51.515630    3992 out.go:177] * Starting "stopped-upgrade-367000" primary control-plane node in "stopped-upgrade-367000" cluster
	I0918 13:22:51.519599    3992 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0918 13:22:51.519617    3992 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0918 13:22:51.519626    3992 cache.go:56] Caching tarball of preloaded images
	I0918 13:22:51.519694    3992 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:22:51.519700    3992 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0918 13:22:51.519761    3992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/config.json ...
	I0918 13:22:51.520283    3992 start.go:360] acquireMachinesLock for stopped-upgrade-367000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:22:51.520322    3992 start.go:364] duration metric: took 32.542µs to acquireMachinesLock for "stopped-upgrade-367000"
	I0918 13:22:51.520331    3992 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:22:51.520340    3992 fix.go:54] fixHost starting: 
	I0918 13:22:51.520463    3992 fix.go:112] recreateIfNeeded on stopped-upgrade-367000: state=Stopped err=<nil>
	W0918 13:22:51.520472    3992 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:22:51.524599    3992 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-367000" ...
	I0918 13:22:51.532520    3992 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:22:51.532605    3992 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50257-:22,hostfwd=tcp::50258-:2376,hostname=stopped-upgrade-367000 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/disk.qcow2
	I0918 13:22:51.575957    3992 main.go:141] libmachine: STDOUT: 
	I0918 13:22:51.575986    3992 main.go:141] libmachine: STDERR: 
	I0918 13:22:51.575993    3992 main.go:141] libmachine: Waiting for VM to start (ssh -p 50257 docker@127.0.0.1)...
	I0918 13:23:11.355424    3992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/config.json ...
	I0918 13:23:11.355885    3992 machine.go:93] provisionDockerMachine start ...
	I0918 13:23:11.356065    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.356348    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.356359    3992 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 13:23:11.432782    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0918 13:23:11.432797    3992 buildroot.go:166] provisioning hostname "stopped-upgrade-367000"
	I0918 13:23:11.432878    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.433036    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.433044    3992 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-367000 && echo "stopped-upgrade-367000" | sudo tee /etc/hostname
	I0918 13:23:11.503725    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-367000
	
	I0918 13:23:11.503779    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.503884    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.503892    3992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-367000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-367000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-367000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 13:23:11.570087    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 13:23:11.570101    3992 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19667-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19667-1040/.minikube}
	I0918 13:23:11.570110    3992 buildroot.go:174] setting up certificates
	I0918 13:23:11.570114    3992 provision.go:84] configureAuth start
	I0918 13:23:11.570121    3992 provision.go:143] copyHostCerts
	I0918 13:23:11.570186    3992 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem, removing ...
	I0918 13:23:11.570194    3992 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem
	I0918 13:23:11.570546    3992 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.pem (1082 bytes)
	I0918 13:23:11.570722    3992 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem, removing ...
	I0918 13:23:11.570726    3992 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem
	I0918 13:23:11.570785    3992 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/cert.pem (1123 bytes)
	I0918 13:23:11.570881    3992 exec_runner.go:144] found /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem, removing ...
	I0918 13:23:11.570886    3992 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem
	I0918 13:23:11.570930    3992 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19667-1040/.minikube/key.pem (1679 bytes)
	I0918 13:23:11.571011    3992 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-367000 san=[127.0.0.1 localhost minikube stopped-upgrade-367000]
	I0918 13:23:11.690341    3992 provision.go:177] copyRemoteCerts
	I0918 13:23:11.690385    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 13:23:11.690395    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:23:11.725329    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 13:23:11.731776    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 13:23:11.738825    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 13:23:11.746194    3992 provision.go:87] duration metric: took 176.074458ms to configureAuth
	I0918 13:23:11.746203    3992 buildroot.go:189] setting minikube options for container-runtime
	I0918 13:23:11.746308    3992 config.go:182] Loaded profile config "stopped-upgrade-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:23:11.746342    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.746432    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.746437    3992 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 13:23:11.811870    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0918 13:23:11.811880    3992 buildroot.go:70] root file system type: tmpfs
	I0918 13:23:11.811929    3992 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 13:23:11.811996    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.812114    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.812147    3992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 13:23:11.880743    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 13:23:11.880812    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:11.880930    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:11.880939    3992 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 13:23:12.233998    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0918 13:23:12.234013    3992 machine.go:96] duration metric: took 878.141292ms to provisionDockerMachine
	I0918 13:23:12.234019    3992 start.go:293] postStartSetup for "stopped-upgrade-367000" (driver="qemu2")
	I0918 13:23:12.234025    3992 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 13:23:12.234079    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 13:23:12.234087    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:23:12.269657    3992 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 13:23:12.270864    3992 info.go:137] Remote host: Buildroot 2021.02.12
	I0918 13:23:12.270872    3992 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/addons for local assets ...
	I0918 13:23:12.270958    3992 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19667-1040/.minikube/files for local assets ...
	I0918 13:23:12.271084    3992 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem -> 15162.pem in /etc/ssl/certs
	I0918 13:23:12.271206    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 13:23:12.274208    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem --> /etc/ssl/certs/15162.pem (1708 bytes)
	I0918 13:23:12.281601    3992 start.go:296] duration metric: took 47.577083ms for postStartSetup
	I0918 13:23:12.281617    3992 fix.go:56] duration metric: took 20.761826459s for fixHost
	I0918 13:23:12.281666    3992 main.go:141] libmachine: Using SSH client type: native
	I0918 13:23:12.281780    3992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10282d190] 0x10282f9d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0918 13:23:12.281789    3992 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0918 13:23:12.348653    3992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726690992.045376212
	
	I0918 13:23:12.348664    3992 fix.go:216] guest clock: 1726690992.045376212
	I0918 13:23:12.348668    3992 fix.go:229] Guest: 2024-09-18 13:23:12.045376212 -0700 PDT Remote: 2024-09-18 13:23:12.281619 -0700 PDT m=+20.887712293 (delta=-236.242788ms)
	I0918 13:23:12.348684    3992 fix.go:200] guest clock delta is within tolerance: -236.242788ms
	I0918 13:23:12.348687    3992 start.go:83] releasing machines lock for "stopped-upgrade-367000", held for 20.828906083s
	I0918 13:23:12.348769    3992 ssh_runner.go:195] Run: cat /version.json
	I0918 13:23:12.348771    3992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 13:23:12.348777    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:23:12.348787    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	W0918 13:23:12.349509    3992 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50543->127.0.0.1:50257: write: broken pipe
	I0918 13:23:12.349527    3992 retry.go:31] will retry after 212.276912ms: ssh: handshake failed: write tcp 127.0.0.1:50543->127.0.0.1:50257: write: broken pipe
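	The broken-pipe handshake failure above is transient: retry.go:31 schedules another dial after a short randomized delay rather than failing the step. A sketch of that retry-with-jitter pattern; the attempt count and delay range are assumptions, not minikube's actual backoff:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // dialWithRetry retries a transient dial failure a few times before
    // giving up, sleeping a jittered delay between attempts.
    func dialWithRetry(dial func() error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            delay := time.Duration(100+rand.Intn(200)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := dialWithRetry(func() error {
            calls++
            if calls < 2 {
                return errors.New("ssh: handshake failed: write: broken pipe")
            }
            return nil
        }, 3)
        fmt.Println("result:", err)
    }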
	W0918 13:23:12.383571    3992 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0918 13:23:12.383637    3992 ssh_runner.go:195] Run: systemctl --version
	I0918 13:23:12.385717    3992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0918 13:23:12.387398    3992 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0918 13:23:12.387436    3992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0918 13:23:12.390943    3992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0918 13:23:12.396675    3992 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
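	The find/sed pipelines above patch any bridge or podman CNI config under /etc/cni/net.d so its subnet becomes the pod CIDR 10.244.0.0/16. A sketch of the same rewrite done structurally over JSON rather than with sed; the flat ipam.subnet layout is an assumption (real conflist files nest plugins in a "plugins" array):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // setSubnet rewrites the "subnet" key of a CNI bridge config's ipam
    // section. This sketch assumes the flat "ipam.subnet" layout used by
    // older single-plugin configs.
    func setSubnet(conf []byte, subnet string) ([]byte, error) {
        var m map[string]any
        if err := json.Unmarshal(conf, &m); err != nil {
            return nil, err
        }
        if ipam, ok := m["ipam"].(map[string]any); ok {
            ipam["subnet"] = subnet
        }
        return json.MarshalIndent(m, "", "  ")
    }

    func main() {
        in := []byte(`{"type":"bridge","ipam":{"type":"host-local","subnet":"10.88.0.0/16"}}`)
        out, _ := setSubnet(in, "10.244.0.0/16")
        fmt.Println(string(out))
    }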
	I0918 13:23:12.396700    3992 start.go:495] detecting cgroup driver to use...
	I0918 13:23:12.396777    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 13:23:12.403985    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0918 13:23:12.407380    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 13:23:12.410848    3992 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 13:23:12.410913    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 13:23:12.414648    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 13:23:12.418408    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 13:23:12.421874    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 13:23:12.425244    3992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 13:23:12.428834    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 13:23:12.432267    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 13:23:12.435427    3992 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 13:23:12.439124    3992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 13:23:12.442356    3992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 13:23:12.445373    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:12.523311    3992 ssh_runner.go:195] Run: sudo systemctl restart containerd
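	The containerd edits above are all line-oriented sed substitutions over /etc/containerd/config.toml: force SystemdCgroup = false (cgroupfs driver), migrate the runc runtime type to io.containerd.runc.v2, pin conf_dir, and re-enable unprivileged ports. A sketch of the SystemdCgroup toggle as the equivalent regex rewrite in Go:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the sed expression in the log:
    //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
    func setSystemdCgroup(toml string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(toml, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
        in := "    SystemdCgroup = true\n"
        fmt.Print(setSystemdCgroup(in, false)) // prints "    SystemdCgroup = false"
    }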
	I0918 13:23:12.534315    3992 start.go:495] detecting cgroup driver to use...
	I0918 13:23:12.534386    3992 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 13:23:12.539172    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 13:23:12.544112    3992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 13:23:12.554233    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 13:23:12.559317    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 13:23:12.564489    3992 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0918 13:23:12.602236    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 13:23:12.640023    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 13:23:12.645675    3992 ssh_runner.go:195] Run: which cri-dockerd
	I0918 13:23:12.646915    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 13:23:12.649951    3992 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 13:23:12.654944    3992 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 13:23:12.740580    3992 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 13:23:12.826880    3992 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 13:23:12.826941    3992 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 13:23:12.832323    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:12.906921    3992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 13:23:14.025103    3992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.118194208s)
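	docker.go:574 configures the cgroupfs driver by scp'ing a small /etc/docker/daemon.json (the 130-byte copy above) and restarting the daemon. The log does not show the file's contents, so the JSON below is a representative assumption of what such a daemon.json looks like, not a dump of what minikube wrote:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Representative daemon.json for a cgroupfs cgroup driver; the
        // exact contents minikube transferred are not shown in the log.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "storage-driver": "overlay2",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out))
    }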
	I0918 13:23:14.025175    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 13:23:14.031260    3992 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0918 13:23:14.037047    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 13:23:14.041782    3992 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 13:23:14.115953    3992 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 13:23:14.191517    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:14.251915    3992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 13:23:14.258424    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 13:23:14.262887    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:14.329393    3992 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 13:23:14.373636    3992 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 13:23:14.373723    3992 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 13:23:14.377157    3992 start.go:563] Will wait 60s for crictl version
	I0918 13:23:14.377223    3992 ssh_runner.go:195] Run: which crictl
	I0918 13:23:14.378603    3992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 13:23:14.393629    3992 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
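	Both "Will wait 60s" lines above are bounded polls: stat the cri-dockerd socket, then query crictl, retrying until a deadline. A sketch of the socket wait; the 500ms poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes,
    // roughly what "Will wait 60s for socket path" does in start.go:542.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // poll interval is an assumption
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }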
	I0918 13:23:14.393735    3992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 13:23:14.412438    3992 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 13:23:14.431614    3992 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0918 13:23:14.431770    3992 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0918 13:23:14.433010    3992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 13:23:14.436405    3992 kubeadm.go:883] updating cluster {Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0918 13:23:14.436455    3992 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0918 13:23:14.436515    3992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 13:23:14.447684    3992 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 13:23:14.447696    3992 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0918 13:23:14.447750    3992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 13:23:14.451588    3992 ssh_runner.go:195] Run: which lz4
	I0918 13:23:14.453316    3992 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 13:23:14.454674    3992 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 13:23:14.454698    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0918 13:23:15.439085    3992 docker.go:649] duration metric: took 985.855583ms to copy over tarball
	I0918 13:23:15.439155    3992 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 13:23:16.601606    3992 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162461875s)
	I0918 13:23:16.601620    3992 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 13:23:16.617652    3992 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0918 13:23:16.620817    3992 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0918 13:23:16.625860    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:16.704563    3992 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 13:23:18.404206    3992 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.699671417s)
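	The preload path above is: check whether /preloaded.tar.lz4 already exists on the guest (the failed stat at ssh_runner.go:352), scp the ~360MB tarball if not, untar it under /var with lz4, remove the tarball, and restart docker so it picks up the extracted image store. A sketch of the existence-check-then-copy decision; it runs stat locally for illustration, whereas minikube runs it over SSH on the Linux guest (where GNU `stat -c` is available):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsCopy runs the same existence check as the log above:
    // `stat -c "%s %y" <path>` exits non-zero when the file is missing.
    func needsCopy(path string) bool {
        return exec.Command("stat", "-c", "%s %y", path).Run() != nil
    }

    func main() {
        if needsCopy("/preloaded.tar.lz4") {
            fmt.Println("scp tarball, then: tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4")
        }
    }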
	I0918 13:23:18.404308    3992 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 13:23:18.418307    3992 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 13:23:18.418319    3992 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0918 13:23:18.418326    3992 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 13:23:18.422241    3992 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0918 13:23:18.425064    3992 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:18.427779    3992 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.427930    3992 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0918 13:23:18.429919    3992 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.430008    3992 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:18.431638    3992 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.431758    3992 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.433468    3992 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.433822    3992 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.434942    3992 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.435440    3992 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.436786    3992 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.436821    3992 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.438740    3992 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.439944    3992 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.806151    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0918 13:23:18.817383    3992 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0918 13:23:18.817411    3992 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0918 13:23:18.817480    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0918 13:23:18.827069    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0918 13:23:18.827192    3992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0918 13:23:18.828836    3992 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0918 13:23:18.828847    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0918 13:23:18.832887    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.838251    3992 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0918 13:23:18.838265    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0918 13:23:18.838737    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.846330    3992 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0918 13:23:18.846352    3992 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.846423    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0918 13:23:18.868823    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0918 13:23:18.878834    3992 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0918 13:23:18.878972    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.880770    3992 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
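	Each cached image goes through the cycle just completed for pause:3.7: `docker image inspect` to see whether the expected image ID is already in the runtime, `docker rmi` of the stale tag, scp of the cached tarball into /var/lib/minikube/images, then `docker load`. A simplified sketch of one iteration; the real code compares image hashes rather than mere presence:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadCachedImage mirrors the per-image cycle in the log: if the
    // runtime lacks the image, drop the stale tag and pipe the cached
    // tarball into `docker load`. Paths are illustrative.
    func loadCachedImage(tag, cachedTar string) error {
        if out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output(); err == nil {
            fmt.Printf("%s already present as %s", tag, out)
            return nil
        }
        _ = exec.Command("docker", "rmi", tag).Run() // ignore "no such image"
        f, err := os.Open(cachedTar)
        if err != nil {
            return err
        }
        defer f.Close()
        load := exec.Command("docker", "load")
        load.Stdin = f
        return load.Run()
    }

    func main() {
        fmt.Println(loadCachedImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"))
    }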
	I0918 13:23:18.880814    3992 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0918 13:23:18.880821    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0918 13:23:18.880834    3992 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.880877    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0918 13:23:18.885685    3992 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0918 13:23:18.885703    3992 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.885768    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0918 13:23:18.895998    3992 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0918 13:23:18.896020    3992 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.896091    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0918 13:23:18.898353    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0918 13:23:18.911870    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.916899    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0918 13:23:18.917870    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0918 13:23:18.917983    3992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:23:18.926123    3992 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0918 13:23:18.926148    3992 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.926161    3992 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0918 13:23:18.926182    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0918 13:23:18.926219    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0918 13:23:18.945430    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.965295    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0918 13:23:18.969440    3992 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0918 13:23:18.969452    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0918 13:23:18.977118    3992 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0918 13:23:18.977143    3992 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:18.977210    3992 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0918 13:23:19.013648    3992 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0918 13:23:19.013680    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0918 13:23:19.289299    3992 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 13:23:19.289661    3992 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:19.317494    3992 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0918 13:23:19.317551    3992 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:19.317682    3992 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:23:19.340927    3992 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 13:23:19.341074    3992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:23:19.342782    3992 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0918 13:23:19.342797    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0918 13:23:19.372231    3992 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0918 13:23:19.372245    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0918 13:23:19.612650    3992 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0918 13:23:19.612689    3992 cache_images.go:92] duration metric: took 1.194384667s to LoadCachedImages
	W0918 13:23:19.612726    3992 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0918 13:23:19.612735    3992 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0918 13:23:19.612793    3992 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-367000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
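	kubeadm.go:946 renders the kubelet systemd drop-in shown above from the cluster config: the binary path is pinned to v1.24.1, the hostname override to the profile name, and the runtime endpoint to cri-dockerd. A sketch of that templating; the template variable names here are illustrative, not minikube's:

    package main

    import (
        "os"
        "text/template"
    )

    // A condensed version of the drop-in above, with the fields minikube
    // fills in exposed as template variables.
    const unit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "Version": "v1.24.1", "Node": "stopped-upgrade-367000", "IP": "10.0.2.15",
        })
    }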
	I0918 13:23:19.612868    3992 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 13:23:19.626171    3992 cni.go:84] Creating CNI manager for ""
	I0918 13:23:19.626182    3992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:23:19.626189    3992 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 13:23:19.626199    3992 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-367000 NodeName:stopped-upgrade-367000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 13:23:19.626268    3992 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-367000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 13:23:19.626329    3992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0918 13:23:19.629719    3992 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 13:23:19.629755    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 13:23:19.632299    3992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0918 13:23:19.637098    3992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 13:23:19.642145    3992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0918 13:23:19.647532    3992 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0918 13:23:19.648858    3992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
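	Both hosts-file updates above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same shell trick: grep -v any old tab-separated entry for the name, append the new mapping, and sudo-copy the temp file back over /etc/hosts. The same upsert expressed in Go:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost reproduces the /etc/hosts one-liner from the log: drop
    // any line ending in "\t<name>", then append "ip\tname".
    func upsertHost(hosts, name, ip string) string {
        var out []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                out = append(out, line)
            }
        }
        return strings.Join(out, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "control-plane.minikube.internal", "10.0.2.15"))
    }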
	I0918 13:23:19.652267    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:23:19.730801    3992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:23:19.737280    3992 certs.go:68] Setting up /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000 for IP: 10.0.2.15
	I0918 13:23:19.737289    3992 certs.go:194] generating shared ca certs ...
	I0918 13:23:19.737299    3992 certs.go:226] acquiring lock for ca certs: {Name:mk6bf733e3b7a8269fa0cc74c7cf113ceab149df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:19.737512    3992 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key
	I0918 13:23:19.737551    3992 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key
	I0918 13:23:19.737559    3992 certs.go:256] generating profile certs ...
	I0918 13:23:19.737649    3992 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.key
	I0918 13:23:19.737668    3992 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f
	I0918 13:23:19.737689    3992 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0918 13:23:19.966707    3992 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f ...
	I0918 13:23:19.966723    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f: {Name:mke4091d5b8545646fea833379b021649e2b0bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:19.968287    3992 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f ...
	I0918 13:23:19.968295    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f: {Name:mkb798a6a3d753260ffed16c1ed60a7be2f3fb02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:19.969191    3992 certs.go:381] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt.f132c78f -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt
	I0918 13:23:19.969388    3992 certs.go:385] copying /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key.f132c78f -> /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key
	I0918 13:23:19.969560    3992 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/proxy-client.key
	I0918 13:23:19.969708    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem (1338 bytes)
	W0918 13:23:19.969733    3992 certs.go:480] ignoring /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516_empty.pem, impossibly tiny 0 bytes
	I0918 13:23:19.969738    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 13:23:19.969759    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem (1082 bytes)
	I0918 13:23:19.969778    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem (1123 bytes)
	I0918 13:23:19.969801    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/key.pem (1679 bytes)
	I0918 13:23:19.969842    3992 certs.go:484] found cert: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem (1708 bytes)
	I0918 13:23:19.970188    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 13:23:19.977470    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 13:23:19.984035    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 13:23:19.990960    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0918 13:23:19.998430    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 13:23:20.006013    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 13:23:20.013341    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 13:23:20.020061    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 13:23:20.026994    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 13:23:20.034204    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/1516.pem --> /usr/share/ca-certificates/1516.pem (1338 bytes)
	I0918 13:23:20.041279    3992 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/ssl/certs/15162.pem --> /usr/share/ca-certificates/15162.pem (1708 bytes)
	I0918 13:23:20.047849    3992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 13:23:20.052866    3992 ssh_runner.go:195] Run: openssl version
	I0918 13:23:20.054762    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 13:23:20.058436    3992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:23:20.060012    3992 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:23:20.060037    3992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 13:23:20.061833    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 13:23:20.064736    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1516.pem && ln -fs /usr/share/ca-certificates/1516.pem /etc/ssl/certs/1516.pem"
	I0918 13:23:20.067550    3992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1516.pem
	I0918 13:23:20.069008    3992 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:53 /usr/share/ca-certificates/1516.pem
	I0918 13:23:20.069034    3992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1516.pem
	I0918 13:23:20.070710    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1516.pem /etc/ssl/certs/51391683.0"
	I0918 13:23:20.074099    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15162.pem && ln -fs /usr/share/ca-certificates/15162.pem /etc/ssl/certs/15162.pem"
	I0918 13:23:20.077198    3992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15162.pem
	I0918 13:23:20.078525    3992 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:53 /usr/share/ca-certificates/15162.pem
	I0918 13:23:20.078544    3992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15162.pem
	I0918 13:23:20.080403    3992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15162.pem /etc/ssl/certs/3ec20f2e.0"
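	The three cert cycles above each install a CA twice: by name under /usr/share/ca-certificates, and as /etc/ssl/certs/<subject-hash>.0 so OpenSSL's lookup-by-hash finds it; that is what the `openssl x509 -hash -noout` / `ln -fs` pairs do. A sketch of one such pair (it shells out to openssl and needs write access to /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkHashedCert computes the subject hash with openssl, then
    // symlinks /etc/ssl/certs/<hash>.0 at the certificate.
    func linkHashedCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkHashedCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }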
	I0918 13:23:20.083395    3992 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 13:23:20.084943    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 13:23:20.086774    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 13:23:20.088656    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 13:23:20.090659    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 13:23:20.092494    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 13:23:20.094296    3992 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
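	The six openssl runs above use `-checkend 86400`, which exits non-zero when a certificate expires within the next 24 hours; certs that fail the check get regenerated. The equivalent check with crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the same test as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        ok, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }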
	I0918 13:23:20.096295    3992 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50335 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0918 13:23:20.096375    3992 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:23:20.106326    3992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 13:23:20.109857    3992 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 13:23:20.109868    3992 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 13:23:20.109897    3992 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 13:23:20.112448    3992 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 13:23:20.112736    3992 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-367000" does not appear in /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:23:20.112831    3992 kubeconfig.go:62] /Users/jenkins/minikube-integration/19667-1040/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-367000" cluster setting kubeconfig missing "stopped-upgrade-367000" context setting]
	I0918 13:23:20.113664    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:23:20.114613    3992 kapi.go:59] client config for stopped-upgrade-367000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e05800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:23:20.114944    3992 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 13:23:20.117943    3992 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-367000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0918 13:23:20.117948    3992 kubeadm.go:1160] stopping kube-system containers ...
	I0918 13:23:20.117998    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 13:23:20.128716    3992 docker.go:483] Stopping containers: [17f70e497468 7337a97ddd7b 014c9f589a4f 56f7c42e2286 f2971c3f4847 2d9c69459424 5d7f652712f1 b19830618519]
	I0918 13:23:20.128806    3992 ssh_runner.go:195] Run: docker stop 17f70e497468 7337a97ddd7b 014c9f589a4f 56f7c42e2286 f2971c3f4847 2d9c69459424 5d7f652712f1 b19830618519
	I0918 13:23:20.139397    3992 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0918 13:23:20.144925    3992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:23:20.147609    3992 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 13:23:20.147615    3992 kubeadm.go:157] found existing configuration files:
	
	I0918 13:23:20.147638    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf
	I0918 13:23:20.150524    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 13:23:20.150550    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:23:20.153361    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf
	I0918 13:23:20.155769    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 13:23:20.155799    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:23:20.158843    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf
	I0918 13:23:20.161947    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 13:23:20.161971    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:23:20.164635    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf
	I0918 13:23:20.167153    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 13:23:20.167174    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
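	kubeadm.go:163 greps each kubeconfig-style file for the expected control-plane endpoint (https://control-plane.minikube.internal:50335) and removes the file when the grep fails, as above, where all four files were already missing. A sketch of that grep-or-remove cycle:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // removeIfStale deletes a config file that does not reference the
    // expected control-plane endpoint.
    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            if os.IsNotExist(err) {
                return nil // nothing to clean, matching the "No such file" case above
            }
            return err
        }
        if !bytes.Contains(data, []byte(endpoint)) {
            return os.Remove(path)
        }
        return nil
    }

    func main() {
        err := removeIfStale("/etc/kubernetes/admin.conf", "https://control-plane.minikube.internal:50335")
        fmt.Println(err)
    }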
	I0918 13:23:20.170066    3992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:23:20.172640    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.195852    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.529646    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.664669    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0918 13:23:20.687390    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
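	The restart path replays kubeadm init piecewise rather than running a full init: certs, kubeconfig, kubelet-start, control-plane, and etcd phases in order, each against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence via os/exec:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases mirrors the sequence above, invoking each
    // `kubeadm init phase` separately against the generated config.
    func runInitPhases(kubeadm, config string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append(append([]string{"init", "phase"}, p...), "--config", config)
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(runInitPhases("/var/lib/minikube/binaries/v1.24.1/kubeadm", "/var/tmp/minikube/kubeadm.yaml"))
    }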
	I0918 13:23:20.715416    3992 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:23:20.715495    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:23:21.217601    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:23:21.716983    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:23:21.721781    3992 api_server.go:72] duration metric: took 1.006391958s to wait for apiserver process to appear ...
	I0918 13:23:21.721792    3992 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:23:21.721802    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:26.723778    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:26.723816    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:31.724327    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:31.724373    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:36.724827    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:36.724881    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:41.725839    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:41.725878    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:46.726651    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:46.726684    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:51.727731    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:51.727783    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:23:56.729230    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:23:56.729273    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:01.731078    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:01.731122    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:06.733364    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:06.733414    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:11.735619    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:11.735662    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:16.738004    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:16.738102    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:21.739257    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
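The run above is minikube's apiserver health probe: each GET against https://10.0.2.15:8443/healthz blocks until the 5-second client timeout fires, is logged as "stopped", and is immediately retried, which is why the probes land roughly 5 s apart for the whole minute before minikube falls back to gathering diagnostics. A minimal Go sketch of this kind of timeout-bounded poll loop (illustrative only — pollHealthz and its parameters are hypothetical, not minikube's actual api_server.go code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes the /healthz endpoint until it answers 200 OK or the
	// overall deadline passes. Each individual probe gives up after `interval`,
	// matching the ~5 s spacing seen in the log above.
	func pollHealthz(url string, interval, deadline time.Duration) error {
		client := &http.Client{
			Timeout: interval, // per-probe client timeout
			Transport: &http.Transport{
				// the apiserver presents a self-signed cert inside the VM
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver is healthy
				}
			}
			// on timeout or a non-200 answer, back off briefly and retry
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy within %v", deadline)
	}

	func main() {
		err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute)
		if err != nil {
			fmt.Println(err) // at this point minikube switches to log collection
		}
	}
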
	I0918 13:24:21.739530    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:21.761221    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:21.761349    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:21.778666    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:21.778770    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:21.790966    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:21.791053    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:21.802245    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:21.802329    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:21.812733    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:21.812812    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:21.827925    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:21.828012    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:21.838677    3992 logs.go:276] 0 containers: []
	W0918 13:24:21.838690    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:21.838754    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:21.848669    3992 logs.go:276] 0 containers: []
	W0918 13:24:21.848680    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:21.848697    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:21.848703    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:21.886366    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:21.886375    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:21.890351    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:21.890357    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:21.904564    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:21.904577    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:21.920475    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:21.920485    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:21.937694    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:21.937703    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:22.016840    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:22.016853    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:22.035108    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:22.035118    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:22.048942    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:22.048952    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:22.062090    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:22.062100    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:22.088582    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:22.088593    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:22.101426    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:22.101440    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:22.126663    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:22.126671    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:22.137976    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:22.137987    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:22.152965    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:22.152976    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:24.671158    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:29.672303    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:29.672560    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:29.690693    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:29.690815    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:29.705010    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:29.705095    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:29.716910    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:29.716985    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:29.727564    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:29.727652    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:29.737571    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:29.737656    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:29.755728    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:29.755805    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:29.767557    3992 logs.go:276] 0 containers: []
	W0918 13:24:29.767568    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:29.767640    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:29.777525    3992 logs.go:276] 0 containers: []
	W0918 13:24:29.777536    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:29.777544    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:29.777549    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:29.801807    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:29.801815    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:29.818523    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:29.818534    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:29.855618    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:29.855633    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:29.870341    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:29.870351    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:29.883536    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:29.883546    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:29.921066    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:29.921074    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:29.935526    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:29.935538    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:29.947451    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:29.947464    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:29.961616    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:29.961626    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:29.975856    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:29.975867    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:30.000963    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:30.000983    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:30.012687    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:30.012699    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:30.024323    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:30.024337    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:30.042241    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:30.042251    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:32.548685    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:37.551412    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:37.552025    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:37.593074    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:37.593237    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:37.614114    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:37.614221    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:37.629062    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:37.629147    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:37.641917    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:37.641995    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:37.652957    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:37.653028    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:37.663969    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:37.664056    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:37.674454    3992 logs.go:276] 0 containers: []
	W0918 13:24:37.674466    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:37.674537    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:37.684481    3992 logs.go:276] 0 containers: []
	W0918 13:24:37.684494    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:37.684501    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:37.684506    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:37.696574    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:37.696583    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:37.709322    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:37.709332    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:37.714073    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:37.714080    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:37.748514    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:37.748525    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:37.763241    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:37.763252    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:37.778613    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:37.778627    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:37.796293    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:37.796303    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:37.813805    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:37.813815    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:37.839044    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:37.839053    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:37.850840    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:37.850854    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:37.868737    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:37.868747    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:37.894798    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:37.894806    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:37.906177    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:37.906193    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:37.944874    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:37.944883    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:40.464802    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:45.467054    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:45.467255    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:45.485895    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:45.485997    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:45.507765    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:45.507844    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:45.518856    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:45.518928    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:45.529334    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:45.529405    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:45.539359    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:45.539439    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:45.549664    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:45.549743    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:45.559582    3992 logs.go:276] 0 containers: []
	W0918 13:24:45.559596    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:45.559659    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:45.569889    3992 logs.go:276] 0 containers: []
	W0918 13:24:45.569903    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:45.569910    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:45.569915    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:45.582200    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:45.582211    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:45.606657    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:45.606672    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:45.631579    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:45.631594    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:45.665426    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:45.665435    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:45.677406    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:45.677415    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:45.700815    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:45.700831    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:45.739217    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:45.739227    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:45.743301    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:45.743310    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:45.768164    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:45.768173    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:45.782810    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:45.782826    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:45.799034    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:45.799045    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:45.811214    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:45.811225    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:45.824849    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:45.824859    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:45.838353    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:45.838366    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:48.351734    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:24:53.353959    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:24:53.354165    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:24:53.370107    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:24:53.370216    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:24:53.382086    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:24:53.382172    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:24:53.392398    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:24:53.392488    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:24:53.402781    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:24:53.402864    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:24:53.414559    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:24:53.414633    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:24:53.425421    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:24:53.425504    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:24:53.435772    3992 logs.go:276] 0 containers: []
	W0918 13:24:53.435783    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:24:53.435854    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:24:53.445162    3992 logs.go:276] 0 containers: []
	W0918 13:24:53.445174    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:24:53.445181    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:24:53.445189    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:24:53.483862    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:24:53.483871    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:24:53.517968    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:24:53.517979    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:24:53.532149    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:24:53.532160    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:24:53.549984    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:24:53.549994    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:24:53.561818    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:24:53.561828    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:24:53.573211    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:24:53.573221    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:24:53.584841    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:24:53.584852    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:24:53.596818    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:24:53.596827    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:24:53.609030    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:24:53.609041    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:24:53.622125    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:24:53.622134    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:24:53.645164    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:24:53.645171    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:24:53.649688    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:24:53.649694    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:24:53.663654    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:24:53.663667    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:24:53.687821    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:24:53.687834    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:24:56.211746    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:01.214027    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:01.214285    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:01.231743    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:01.231844    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:01.245748    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:01.245838    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:01.257007    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:01.257084    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:01.267539    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:01.267624    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:01.278784    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:01.278865    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:01.289480    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:01.289555    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:01.300118    3992 logs.go:276] 0 containers: []
	W0918 13:25:01.300130    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:01.300197    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:01.310083    3992 logs.go:276] 0 containers: []
	W0918 13:25:01.310100    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:01.310108    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:01.310115    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:01.323635    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:01.323646    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:01.338279    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:01.338289    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:01.350078    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:01.350088    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:01.363192    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:01.363202    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:01.388395    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:01.388404    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:01.422154    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:01.422164    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:01.435663    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:01.435672    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:01.446603    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:01.446615    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:01.458272    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:01.458283    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:01.496919    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:01.496939    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:01.514368    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:01.514378    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:01.525705    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:01.525720    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:01.529965    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:01.529975    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:01.554393    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:01.554402    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:04.068372    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:09.070506    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:09.071266    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:09.102026    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:09.102182    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:09.131929    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:09.132022    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:09.144481    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:09.144561    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:09.156003    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:09.156100    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:09.166476    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:09.166572    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:09.178854    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:09.178946    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:09.189667    3992 logs.go:276] 0 containers: []
	W0918 13:25:09.189681    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:09.189750    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:09.199778    3992 logs.go:276] 0 containers: []
	W0918 13:25:09.199788    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:09.199795    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:09.199801    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:09.241290    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:09.241302    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:09.252818    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:09.252831    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:09.273589    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:09.273600    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:09.309892    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:09.309901    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:09.323633    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:09.323643    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:09.339406    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:09.339417    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:09.370291    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:09.370306    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:09.374639    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:09.374647    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:09.388600    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:09.388611    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:09.412405    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:09.412419    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:09.427127    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:09.427137    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:09.439455    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:09.439467    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:09.451983    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:09.451995    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:09.476741    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:09.476749    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:11.990983    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:16.993589    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:16.994092    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:17.037128    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:17.037285    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:17.054726    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:17.054843    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:17.068236    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:17.068331    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:17.079853    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:17.079943    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:17.090792    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:17.090868    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:17.102190    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:17.102277    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:17.112654    3992 logs.go:276] 0 containers: []
	W0918 13:25:17.112665    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:17.112741    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:17.123164    3992 logs.go:276] 0 containers: []
	W0918 13:25:17.123176    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:17.123184    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:17.123189    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:17.137506    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:17.137517    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:17.152974    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:17.152984    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:17.178132    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:17.178140    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:17.216042    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:17.216049    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:17.233712    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:17.233725    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:17.251568    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:17.251578    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:17.265580    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:17.265589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:17.277588    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:17.277598    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:17.290867    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:17.290881    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:17.302106    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:17.302118    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:17.347629    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:17.347647    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:17.373204    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:17.373215    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:17.384963    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:17.384974    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:17.396721    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:17.396731    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:19.903172    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:24.904452    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:24.904683    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:24.921378    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:24.921466    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:24.933369    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:24.933483    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:24.943762    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:24.943847    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:24.955024    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:24.955099    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:24.965706    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:24.965789    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:24.978011    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:24.978097    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:24.988163    3992 logs.go:276] 0 containers: []
	W0918 13:25:24.988174    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:24.988239    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:24.999946    3992 logs.go:276] 0 containers: []
	W0918 13:25:24.999962    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:24.999972    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:24.999977    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:25.005041    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:25.005048    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:25.016661    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:25.016678    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:25.032247    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:25.032257    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:25.047287    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:25.047297    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:25.071349    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:25.071357    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:25.082715    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:25.082725    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:25.121467    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:25.121477    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:25.156203    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:25.156216    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:25.174720    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:25.174731    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:25.190760    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:25.190773    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:25.208348    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:25.208359    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:25.221679    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:25.221690    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:25.253051    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:25.253063    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:25.267298    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:25.267308    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:27.784460    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:32.786709    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:32.787042    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:32.811702    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:32.811847    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:32.831704    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:32.831819    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:32.844201    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:32.844287    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:32.855725    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:32.855812    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:32.865956    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:32.866034    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:32.876640    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:32.876729    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:32.887982    3992 logs.go:276] 0 containers: []
	W0918 13:25:32.887997    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:32.888071    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:32.898596    3992 logs.go:276] 0 containers: []
	W0918 13:25:32.898609    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:32.898616    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:32.898621    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:32.911970    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:32.911980    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:32.947411    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:32.947420    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:32.961813    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:32.961824    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:32.979060    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:32.979070    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:33.017445    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:33.017455    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:33.021752    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:33.021757    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:33.040580    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:33.040589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:33.052243    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:33.052255    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:33.066336    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:33.066345    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:33.090494    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:33.090505    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:33.107188    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:33.107197    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:33.119854    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:33.119868    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:33.130804    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:33.130816    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:33.142910    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:33.142922    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:35.668394    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:40.671000    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:40.671499    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:40.711977    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:40.712142    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:40.730823    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:40.730931    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:40.744319    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:40.744417    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:40.758200    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:40.758291    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:40.770025    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:40.770113    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:40.780849    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:40.780941    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:40.790814    3992 logs.go:276] 0 containers: []
	W0918 13:25:40.790830    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:40.790904    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:40.801623    3992 logs.go:276] 0 containers: []
	W0918 13:25:40.801635    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:40.801645    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:40.801651    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:40.838412    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:40.838420    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:40.849966    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:40.849975    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:40.868442    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:40.868453    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:40.881327    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:40.881337    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:40.918617    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:40.918632    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:40.931649    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:40.931667    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:40.945208    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:40.945218    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:40.969760    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:40.969770    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:40.995576    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:40.995587    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:41.009502    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:41.009513    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:41.013980    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:41.013987    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:41.030530    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:41.030544    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:41.044753    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:41.044764    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:41.057487    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:41.057499    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:43.571426    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:48.573791    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:48.574080    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:48.598060    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:48.598184    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:48.614896    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:48.614992    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:48.627493    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:48.627568    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:48.640583    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:48.640658    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:48.651508    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:48.651574    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:48.662030    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:48.662112    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:48.672253    3992 logs.go:276] 0 containers: []
	W0918 13:25:48.672266    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:48.672328    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:48.683017    3992 logs.go:276] 0 containers: []
	W0918 13:25:48.683027    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:48.683036    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:48.683046    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:48.695001    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:48.695009    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:48.709806    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:48.709818    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:48.747962    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:48.747974    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:48.762325    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:48.762338    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:48.788165    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:48.788179    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:48.802700    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:48.802716    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:48.821394    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:48.821405    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:48.845368    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:48.845380    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:48.857576    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:48.857587    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:48.862162    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:48.862169    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:48.874779    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:48.874792    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:48.909936    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:48.909950    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:48.929086    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:48.929096    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:48.940599    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:48.940611    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
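	[editor's note] After each failed probe, the cycle above re-enumerates the control-plane containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and then tails each match with `docker logs --tail 400`. A hedged sketch of that enumerate-and-tail pattern is below, run locally rather than over SSH as ssh_runner.go does; the component names and the 400-line tail come from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose names match k8s_<component>,
// mirroring the "docker ps -a --filter=name=..." calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := containerIDs(comp)
		if err != nil {
			fmt.Println("listing", comp, "failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), comp, ids)
		for _, id := range ids {
			// Tail the last 400 lines, as in "docker logs --tail 400 <id>".
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", comp, id, logs)
		}
	}
}
```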
	I0918 13:25:51.455386    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:25:56.457547    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:25:56.457737    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:25:56.472729    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:25:56.472832    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:25:56.484519    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:25:56.484593    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:25:56.495637    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:25:56.495722    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:25:56.506181    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:25:56.506269    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:25:56.516819    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:25:56.516898    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:25:56.527563    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:25:56.527639    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:25:56.537619    3992 logs.go:276] 0 containers: []
	W0918 13:25:56.537631    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:25:56.537699    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:25:56.548391    3992 logs.go:276] 0 containers: []
	W0918 13:25:56.548401    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:25:56.548410    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:25:56.548416    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:25:56.553105    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:25:56.553115    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:25:56.570354    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:25:56.570365    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:25:56.586337    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:25:56.586346    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:25:56.600079    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:25:56.600089    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:25:56.612287    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:25:56.612297    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:25:56.623920    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:25:56.623934    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:25:56.636291    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:25:56.636302    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:25:56.661511    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:25:56.661522    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:25:56.699326    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:25:56.699337    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:25:56.729088    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:25:56.729103    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:25:56.743327    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:25:56.743337    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:25:56.756019    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:25:56.756032    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:25:56.794591    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:25:56.794599    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:25:56.811644    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:25:56.811655    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:25:59.325118    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:04.327388    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:04.327646    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:04.352297    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:04.352426    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:04.368476    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:04.368567    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:04.381081    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:04.381169    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:04.392592    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:04.392675    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:04.407498    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:04.407582    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:04.422000    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:04.422088    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:04.431879    3992 logs.go:276] 0 containers: []
	W0918 13:26:04.431890    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:04.431954    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:04.442233    3992 logs.go:276] 0 containers: []
	W0918 13:26:04.442243    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:04.442251    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:04.442256    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:04.455225    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:04.455240    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:04.479066    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:04.479074    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:04.504227    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:04.504239    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:04.508933    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:04.508943    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:04.543134    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:04.543145    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:04.559398    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:04.559411    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:04.573931    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:04.573949    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:04.591762    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:04.591774    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:04.630522    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:04.630531    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:04.641709    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:04.641720    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:04.656945    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:04.656959    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:04.674257    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:04.674268    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:04.687227    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:04.687238    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:04.701788    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:04.701800    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:07.215131    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:12.217377    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:12.217655    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:12.239438    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:12.239579    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:12.254251    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:12.254343    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:12.267274    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:12.267363    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:12.278236    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:12.278327    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:12.289518    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:12.289602    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:12.300759    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:12.300842    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:12.311226    3992 logs.go:276] 0 containers: []
	W0918 13:26:12.311237    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:12.311310    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:12.321292    3992 logs.go:276] 0 containers: []
	W0918 13:26:12.321306    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:12.321315    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:12.321321    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:12.333702    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:12.333715    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:12.345938    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:12.345953    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:12.357505    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:12.357517    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:12.361964    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:12.361970    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:12.377289    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:12.377300    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:12.391648    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:12.391661    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:12.404525    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:12.404540    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:12.418203    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:12.418217    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:12.456924    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:12.456935    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:12.491155    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:12.491170    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:12.509737    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:12.509749    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:12.527096    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:12.527106    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:12.550571    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:12.550579    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:12.575556    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:12.575566    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:15.089531    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:20.091835    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:20.092001    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:20.114389    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:20.114497    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:20.127218    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:20.127307    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:20.138326    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:20.138399    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:20.148832    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:20.148921    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:20.166779    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:20.166858    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:20.177299    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:20.177372    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:20.187341    3992 logs.go:276] 0 containers: []
	W0918 13:26:20.187353    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:20.187427    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:20.197591    3992 logs.go:276] 0 containers: []
	W0918 13:26:20.197602    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:20.197608    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:20.197614    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:20.202261    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:20.202271    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:20.216864    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:20.216874    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:20.240525    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:20.240533    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:20.279545    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:20.279556    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:20.313820    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:20.313831    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:20.331082    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:20.331094    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:20.344282    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:20.344294    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:20.358386    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:20.358396    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:20.384327    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:20.384336    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:20.398616    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:20.398626    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:20.410348    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:20.410358    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:20.425647    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:20.425656    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:20.438735    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:20.438745    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:20.450170    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:20.450182    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:22.964162    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:27.966560    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:27.966707    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:27.983795    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:27.983894    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:27.998780    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:27.998867    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:28.010903    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:28.010975    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:28.021317    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:28.021400    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:28.033449    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:28.033529    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:28.044210    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:28.044291    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:28.054954    3992 logs.go:276] 0 containers: []
	W0918 13:26:28.054966    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:28.055036    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:28.065433    3992 logs.go:276] 0 containers: []
	W0918 13:26:28.065448    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:28.065456    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:28.065463    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:28.104695    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:28.104705    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:28.109370    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:28.109379    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:28.123429    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:28.123443    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:28.135038    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:28.135050    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:28.148102    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:28.148115    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:28.162671    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:28.162682    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:28.196763    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:28.196774    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:28.211683    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:28.211693    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:28.223061    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:28.223072    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:28.234529    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:28.234538    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:28.260352    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:28.260368    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:28.272525    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:28.272538    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:28.290199    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:28.290213    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:28.314438    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:28.314446    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:30.831077    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:35.833396    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:35.833890    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:35.867851    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:35.868009    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:35.886838    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:35.886944    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:35.900248    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:35.900353    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:35.911241    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:35.911329    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:35.921780    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:35.921857    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:35.933127    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:35.933209    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:35.943778    3992 logs.go:276] 0 containers: []
	W0918 13:26:35.943793    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:35.943861    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:35.956687    3992 logs.go:276] 0 containers: []
	W0918 13:26:35.956700    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:35.956708    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:35.956714    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:35.970756    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:35.970767    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:35.995818    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:35.995827    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:36.012043    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:36.012057    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:36.024620    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:36.024631    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:36.047705    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:36.047716    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:36.084150    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:36.084161    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:36.095237    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:36.095249    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:36.108745    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:36.108754    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:36.131481    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:36.131489    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:36.149367    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:36.149379    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:36.163875    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:36.163889    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:36.175305    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:36.175320    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:36.179328    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:36.179333    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:36.191863    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:36.191872    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:38.727784    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:43.729967    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:43.730323    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:43.754726    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:43.754854    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:43.773460    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:43.773553    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:43.785865    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:43.785958    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:43.796537    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:43.796621    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:43.806856    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:43.806931    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:43.817542    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:43.817626    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:43.827911    3992 logs.go:276] 0 containers: []
	W0918 13:26:43.827925    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:43.827989    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:43.837878    3992 logs.go:276] 0 containers: []
	W0918 13:26:43.837892    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:43.837901    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:43.837907    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:43.841973    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:43.841982    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:43.856299    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:43.856309    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:43.880658    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:43.880667    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:43.919126    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:43.919134    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:43.960729    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:43.960742    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:43.974725    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:43.974736    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:43.998751    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:43.998768    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:44.021035    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:44.021049    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:44.035316    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:44.035328    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:44.047623    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:44.047635    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:44.072607    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:44.072618    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:44.090966    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:44.090978    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:44.102942    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:44.102953    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:44.119037    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:44.119048    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:46.634108    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:51.635772    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:51.636315    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:51.675456    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:51.675624    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:51.701562    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:51.701695    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:51.716187    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:51.716286    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:51.727977    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:51.728062    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:51.738682    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:51.738760    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:51.750929    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:51.751014    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:51.761422    3992 logs.go:276] 0 containers: []
	W0918 13:26:51.761433    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:51.761506    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:51.772602    3992 logs.go:276] 0 containers: []
	W0918 13:26:51.772618    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:51.772626    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:51.772631    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:51.811625    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:51.811633    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:51.823167    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:51.823179    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:51.835437    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:51.835449    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:51.859313    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:51.859321    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:51.894014    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:51.894024    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:51.908012    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:51.908023    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:51.920528    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:51.920539    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:51.934884    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:51.934901    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:51.968426    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:51.968440    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:51.980347    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:51.980356    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:51.998389    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:51.998397    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:52.009703    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:52.009713    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:26:52.013810    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:52.013819    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:52.030671    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:52.030684    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:54.546627    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:26:59.548884    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:26:59.548994    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:26:59.567234    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:26:59.567326    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:26:59.578854    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:26:59.578948    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:26:59.589566    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:26:59.589651    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:26:59.601542    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:26:59.601626    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:26:59.612922    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:26:59.613008    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:26:59.624667    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:26:59.624747    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:26:59.636301    3992 logs.go:276] 0 containers: []
	W0918 13:26:59.636357    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:26:59.636438    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:26:59.647739    3992 logs.go:276] 0 containers: []
	W0918 13:26:59.647752    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:26:59.647760    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:26:59.647766    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:26:59.675446    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:26:59.675460    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:26:59.687749    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:26:59.687760    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:26:59.724327    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:26:59.724342    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:26:59.736374    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:26:59.736388    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:26:59.748465    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:26:59.748477    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:26:59.760819    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:26:59.760834    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:26:59.772665    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:26:59.772678    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:26:59.812423    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:26:59.812436    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:26:59.835122    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:26:59.835137    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:26:59.849536    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:26:59.849549    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:26:59.865767    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:26:59.865783    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:26:59.889031    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:26:59.889043    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:26:59.903495    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:26:59.903511    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:26:59.927910    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:26:59.927924    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:27:02.433337    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:07.435519    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:07.435727    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:27:07.450581    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:27:07.450684    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:27:07.462412    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:27:07.462487    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:27:07.472872    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:27:07.472961    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:27:07.483103    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:27:07.483192    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:27:07.493462    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:27:07.493539    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:27:07.507651    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:27:07.507744    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:27:07.518864    3992 logs.go:276] 0 containers: []
	W0918 13:27:07.518876    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:27:07.518951    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:27:07.529306    3992 logs.go:276] 0 containers: []
	W0918 13:27:07.529322    3992 logs.go:278] No container was found matching "storage-provisioner"
	I0918 13:27:07.529331    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:27:07.529336    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:27:07.567462    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:27:07.567470    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:27:07.580040    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:27:07.580052    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:27:07.615956    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:27:07.615971    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:27:07.629585    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:27:07.629600    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:27:07.641270    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:27:07.641281    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:27:07.659016    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:27:07.659032    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:27:07.672833    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:27:07.672847    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:27:07.677318    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:27:07.677328    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:27:07.703875    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:27:07.703913    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:27:07.718679    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:27:07.718690    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:27:07.730390    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:27:07.730400    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:27:07.744338    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:27:07.744348    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:27:07.755799    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:27:07.755811    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:27:07.767515    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:27:07.767531    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:27:10.290506    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:15.292521    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:15.292608    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:27:15.304068    3992 logs.go:276] 2 containers: [e316ead5668a f2971c3f4847]
	I0918 13:27:15.304155    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:27:15.314676    3992 logs.go:276] 2 containers: [cfad6d8d694e 56f7c42e2286]
	I0918 13:27:15.314755    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:27:15.324939    3992 logs.go:276] 1 containers: [e033d15d6cf7]
	I0918 13:27:15.325016    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:27:15.335557    3992 logs.go:276] 2 containers: [56799e7a27d6 17f70e497468]
	I0918 13:27:15.335648    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:27:15.346171    3992 logs.go:276] 1 containers: [e59b2801d2b1]
	I0918 13:27:15.346253    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:27:15.356727    3992 logs.go:276] 2 containers: [bdb335de5ba5 014c9f589a4f]
	I0918 13:27:15.356815    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:27:15.367158    3992 logs.go:276] 0 containers: []
	W0918 13:27:15.367169    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:27:15.367232    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:27:15.377436    3992 logs.go:276] 0 containers: []
	W0918 13:27:15.377446    3992 logs.go:278] No container was found matching "storage-provisioner"
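
The container IDs above come from filtering on the k8s_<component> name prefix that cri-dockerd gives kubelet-managed containers. A sketch of the same enumeration for any one component (the component variable is illustrative):

# Enumerate containers for one control-plane component by name prefix.
component=kube-apiserver   # illustrative: also etcd, coredns, kube-scheduler, ...
docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}'
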
	I0918 13:27:15.377456    3992 logs.go:123] Gathering logs for kube-scheduler [17f70e497468] ...
	I0918 13:27:15.377462    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17f70e497468"
	I0918 13:27:15.389215    3992 logs.go:123] Gathering logs for kube-controller-manager [bdb335de5ba5] ...
	I0918 13:27:15.389227    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb335de5ba5"
	I0918 13:27:15.407049    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:27:15.407058    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:27:15.429214    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:27:15.429222    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:27:15.467521    3992 logs.go:123] Gathering logs for kube-apiserver [e316ead5668a] ...
	I0918 13:27:15.467530    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e316ead5668a"
	I0918 13:27:15.481372    3992 logs.go:123] Gathering logs for kube-apiserver [f2971c3f4847] ...
	I0918 13:27:15.481382    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2971c3f4847"
	I0918 13:27:15.507920    3992 logs.go:123] Gathering logs for etcd [cfad6d8d694e] ...
	I0918 13:27:15.507939    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfad6d8d694e"
	I0918 13:27:15.522627    3992 logs.go:123] Gathering logs for kube-controller-manager [014c9f589a4f] ...
	I0918 13:27:15.522638    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 014c9f589a4f"
	I0918 13:27:15.535812    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:27:15.535823    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:27:15.548954    3992 logs.go:123] Gathering logs for etcd [56f7c42e2286] ...
	I0918 13:27:15.548966    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f7c42e2286"
	I0918 13:27:15.563541    3992 logs.go:123] Gathering logs for coredns [e033d15d6cf7] ...
	I0918 13:27:15.563550    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e033d15d6cf7"
	I0918 13:27:15.576716    3992 logs.go:123] Gathering logs for kube-scheduler [56799e7a27d6] ...
	I0918 13:27:15.576728    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56799e7a27d6"
	I0918 13:27:15.589606    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:27:15.589617    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:27:15.593707    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:27:15.593714    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:27:15.630430    3992 logs.go:123] Gathering logs for kube-proxy [e59b2801d2b1] ...
	I0918 13:27:15.630445    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e59b2801d2b1"
	I0918 13:27:18.148439    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:23.150531    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:23.150621    3992 kubeadm.go:597] duration metric: took 4m3.047120084s to restartPrimaryControlPlane
	W0918 13:27:23.150680    3992 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0918 13:27:23.150711    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0918 13:27:24.107814    3992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 13:27:24.112834    3992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 13:27:24.115757    3992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 13:27:24.118552    3992 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 13:27:24.118559    3992 kubeadm.go:157] found existing configuration files:
	
	I0918 13:27:24.118590    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf
	I0918 13:27:24.121032    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 13:27:24.121062    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 13:27:24.124011    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf
	I0918 13:27:24.127117    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 13:27:24.127146    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 13:27:24.129830    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf
	I0918 13:27:24.132424    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 13:27:24.132447    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 13:27:24.135573    3992 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf
	I0918 13:27:24.138551    3992 kubeadm.go:163] "https://control-plane.minikube.internal:50335" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50335 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 13:27:24.138591    3992 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
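
The four config checks above share one pattern: grep each kubeconfig for the expected control-plane endpoint and delete the file when the endpoint, or the file itself, is missing (grep's exit status 2 means the file does not exist). A condensed sketch of that cleanup loop, using the endpoint from this run:

endpoint="https://control-plane.minikube.internal:50335"
for f in admin kubelet controller-manager scheduler; do
  # grep exits non-zero if the endpoint is absent or the file is missing,
  # so the stale (or non-existent) config is removed either way.
  sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
done
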
	I0918 13:27:24.141282    3992 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0918 13:27:24.156954    3992 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0918 13:27:24.157011    3992 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 13:27:24.214289    3992 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 13:27:24.214348    3992 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 13:27:24.214409    3992 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 13:27:24.264987    3992 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 13:27:24.269280    3992 out.go:235]   - Generating certificates and keys ...
	I0918 13:27:24.269317    3992 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 13:27:24.269352    3992 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 13:27:24.269413    3992 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0918 13:27:24.269464    3992 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0918 13:27:24.269557    3992 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0918 13:27:24.269617    3992 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0918 13:27:24.269649    3992 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0918 13:27:24.269686    3992 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0918 13:27:24.269728    3992 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0918 13:27:24.269772    3992 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0918 13:27:24.269794    3992 kubeadm.go:310] [certs] Using the existing "sa" key
	I0918 13:27:24.269823    3992 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 13:27:24.639180    3992 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 13:27:24.826786    3992 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 13:27:24.868300    3992 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 13:27:24.952020    3992 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 13:27:24.983414    3992 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 13:27:24.983843    3992 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 13:27:24.984014    3992 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 13:27:25.056088    3992 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 13:27:25.060241    3992 out.go:235]   - Booting up control plane ...
	I0918 13:27:25.060293    3992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 13:27:25.060333    3992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 13:27:25.060367    3992 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 13:27:25.067013    3992 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 13:27:25.067844    3992 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 13:27:29.570704    3992 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502489 seconds
	I0918 13:27:29.570765    3992 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 13:27:29.573954    3992 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 13:27:30.085041    3992 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 13:27:30.085209    3992 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-367000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 13:27:30.590751    3992 kubeadm.go:310] [bootstrap-token] Using token: bdspm0.fklw4sa7cic7hhpg
	I0918 13:27:30.596728    3992 out.go:235]   - Configuring RBAC rules ...
	I0918 13:27:30.596788    3992 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 13:27:30.596835    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 13:27:30.601165    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 13:27:30.602006    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0918 13:27:30.602922    3992 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 13:27:30.603817    3992 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 13:27:30.606937    3992 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 13:27:30.783063    3992 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 13:27:30.994873    3992 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 13:27:30.995384    3992 kubeadm.go:310] 
	I0918 13:27:30.995482    3992 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 13:27:30.995488    3992 kubeadm.go:310] 
	I0918 13:27:30.995543    3992 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 13:27:30.995548    3992 kubeadm.go:310] 
	I0918 13:27:30.995561    3992 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 13:27:30.995595    3992 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 13:27:30.995625    3992 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 13:27:30.995628    3992 kubeadm.go:310] 
	I0918 13:27:30.995656    3992 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 13:27:30.995659    3992 kubeadm.go:310] 
	I0918 13:27:30.995682    3992 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 13:27:30.995685    3992 kubeadm.go:310] 
	I0918 13:27:30.995783    3992 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 13:27:30.995842    3992 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 13:27:30.995882    3992 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 13:27:30.995889    3992 kubeadm.go:310] 
	I0918 13:27:30.995925    3992 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 13:27:30.996074    3992 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 13:27:30.996084    3992 kubeadm.go:310] 
	I0918 13:27:30.996129    3992 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bdspm0.fklw4sa7cic7hhpg \
	I0918 13:27:30.996189    3992 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 \
	I0918 13:27:30.996202    3992 kubeadm.go:310] 	--control-plane 
	I0918 13:27:30.996204    3992 kubeadm.go:310] 
	I0918 13:27:30.996251    3992 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 13:27:30.996253    3992 kubeadm.go:310] 
	I0918 13:27:30.996296    3992 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bdspm0.fklw4sa7cic7hhpg \
	I0918 13:27:30.996372    3992 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:491fed232b633ec8404b91d551b715c799429ab9f4658c5350f7586533e73a75 
	I0918 13:27:30.996431    3992 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
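
The join commands above embed the bootstrap token and the CA certificate hash; both can be inspected or regenerated on the control-plane node with standard kubeadm subcommands (a sketch, not taken from this log):

# List active bootstrap tokens (bdspm0.fklw4sa7cic7hhpg in this run).
sudo kubeadm token list
# Print a complete worker join command with a fresh token and CA hash.
sudo kubeadm token create --print-join-command
# The [WARNING Service-Kubelet] above suggests enabling the kubelet unit:
sudo systemctl enable kubelet.service
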
	I0918 13:27:30.996438    3992 cni.go:84] Creating CNI manager for ""
	I0918 13:27:30.996445    3992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:27:31.000983    3992 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 13:27:31.005972    3992 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 13:27:31.029521    3992 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
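
The 496-byte file written here is the bridge CNI config selected at cni.go:158. Its exact contents are not in this log; a hypothetical conflist of roughly the expected shape (the plugin list, subnet, and names are assumptions):

# Sketch only: write a minimal bridge + portmap CNI config.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
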
	I0918 13:27:31.035351    3992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 13:27:31.035422    3992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 13:27:31.035497    3992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-367000 minikube.k8s.io/updated_at=2024_09_18T13_27_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=stopped-upgrade-367000 minikube.k8s.io/primary=true
	I0918 13:27:31.066961    3992 ops.go:34] apiserver oom_adj: -16
	I0918 13:27:31.066961    3992 kubeadm.go:1113] duration metric: took 31.599167ms to wait for elevateKubeSystemPrivileges
	I0918 13:27:31.082511    3992 kubeadm.go:394] duration metric: took 4m10.992796583s to StartCluster
	I0918 13:27:31.082530    3992 settings.go:142] acquiring lock: {Name:mkbb043d0459391a7d922bd686e90e22968feef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:31.082613    3992 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:27:31.083004    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/kubeconfig: {Name:mkc39e19086c32e3258f75506afcbcc582926b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:27:31.083187    3992 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:27:31.083280    3992 config.go:182] Loaded profile config "stopped-upgrade-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0918 13:27:31.083241    3992 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 13:27:31.083311    3992 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-367000"
	I0918 13:27:31.083322    3992 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-367000"
	W0918 13:27:31.083327    3992 addons.go:243] addon storage-provisioner should already be in state true
	I0918 13:27:31.083339    3992 host.go:66] Checking if "stopped-upgrade-367000" exists ...
	I0918 13:27:31.083353    3992 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-367000"
	I0918 13:27:31.083361    3992 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-367000"
	I0918 13:27:31.084410    3992 kapi.go:59] client config for stopped-upgrade-367000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.key", CAFile:"/Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e05800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 13:27:31.084530    3992 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-367000"
	W0918 13:27:31.084537    3992 addons.go:243] addon default-storageclass should already be in state true
	I0918 13:27:31.084544    3992 host.go:66] Checking if "stopped-upgrade-367000" exists ...
	I0918 13:27:31.086873    3992 out.go:177] * Verifying Kubernetes components...
	I0918 13:27:31.087272    3992 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:31.091106    3992 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 13:27:31.091112    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:27:31.094906    3992 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 13:27:31.098961    3992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 13:27:31.102948    3992 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:31.102961    3992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 13:27:31.102969    3992 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/stopped-upgrade-367000/id_rsa Username:docker}
	I0918 13:27:31.170610    3992 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 13:27:31.177063    3992 api_server.go:52] waiting for apiserver process to appear ...
	I0918 13:27:31.177128    3992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 13:27:31.181788    3992 api_server.go:72] duration metric: took 98.591917ms to wait for apiserver process to appear ...
	I0918 13:27:31.181797    3992 api_server.go:88] waiting for apiserver healthz status ...
	I0918 13:27:31.181805    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:31.187263    3992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 13:27:31.202955    3992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 13:27:31.592439    3992 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0918 13:27:31.592451    3992 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0918 13:27:36.183294    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:36.183318    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:41.183621    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:41.183642    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:46.183725    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:46.183772    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:51.183967    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:51.184003    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:27:56.184342    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:27:56.184369    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:01.184756    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:01.184776    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0918 13:28:01.593894    3992 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0918 13:28:01.603062    3992 out.go:177] * Enabled addons: storage-provisioner
	I0918 13:28:01.610058    3992 addons.go:510] duration metric: took 30.527649167s for enable addons: enabled=[storage-provisioner]
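
The default-storageclass failure above is a symptom of the unreachable apiserver, not of the addon itself: the StorageClass list call timed out. On a reachable cluster the same step can be performed manually; a sketch with standard kubectl (the class name "standard" comes from the error message above):

# Inspect storage classes, then mark "standard" as the cluster default.
kubectl get storageclass
kubectl annotate storageclass standard \
  storageclass.kubernetes.io/is-default-class=true --overwrite
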
	I0918 13:28:06.185307    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:06.185351    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:11.186187    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:11.186213    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:16.187188    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:16.187235    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:21.187439    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:21.187464    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:26.188778    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:26.188823    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:31.190293    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:31.190463    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:31.202458    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:31.202541    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:31.213129    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:31.213212    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:31.223286    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:31.223357    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:31.234055    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:31.234133    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:31.245050    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:31.245129    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:31.255761    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:31.255832    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:31.266023    3992 logs.go:276] 0 containers: []
	W0918 13:28:31.266035    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:31.266104    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:31.276267    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:31.276291    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:31.276296    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:31.281100    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:31.281108    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:31.304755    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:31.304766    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:31.316400    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:31.316411    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:31.328236    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:31.328246    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:31.345770    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:31.345780    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:31.357295    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:31.357305    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:31.368455    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:31.368465    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:31.402288    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:31.402300    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:31.416190    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:31.416201    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:31.428261    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:31.428272    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:31.443312    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:31.443321    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:31.467958    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:31.467977    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:34.007891    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:39.010104    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:39.010217    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:39.021968    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:39.022059    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:39.033210    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:39.033294    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:39.043275    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:39.043359    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:39.054344    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:39.054429    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:39.064718    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:39.064802    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:39.074755    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:39.074838    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:39.084976    3992 logs.go:276] 0 containers: []
	W0918 13:28:39.084987    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:39.085057    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:39.095398    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:39.095417    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:39.095424    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:39.109616    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:39.109626    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:39.145578    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:39.145589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:39.162825    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:39.162839    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:39.174461    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:39.174470    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:39.186812    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:39.186823    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:39.198556    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:39.198565    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:39.216718    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:39.216730    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:39.229631    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:39.229643    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:39.254796    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:39.254806    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:39.289785    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:39.289793    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:39.293721    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:39.293730    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:39.308118    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:39.308128    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:41.821684    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:46.823251    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:46.823576    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:46.848348    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:46.848533    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:46.866964    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:46.867086    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:46.879725    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:46.879824    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:46.891600    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:46.891703    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:46.902262    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:46.902367    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:46.912487    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:46.912578    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:46.926486    3992 logs.go:276] 0 containers: []
	W0918 13:28:46.926500    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:46.926580    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:46.937514    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:46.937533    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:46.937539    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:46.948995    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:46.949005    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:46.972722    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:46.972734    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:46.977208    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:46.977215    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:46.988843    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:46.988853    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:47.004300    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:47.004312    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:47.021807    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:47.021820    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:47.036047    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:47.036058    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:47.051272    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:47.051287    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:47.063398    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:47.063413    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:47.097167    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:47.097178    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:47.133771    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:47.133783    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:47.148210    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:47.148221    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:49.664225    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:28:54.666411    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:28:54.666595    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:28:54.679607    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:28:54.679696    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:28:54.691123    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:28:54.691211    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:28:54.701508    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:28:54.701589    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:28:54.712047    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:28:54.712132    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:28:54.722533    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:28:54.722620    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:28:54.732869    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:28:54.732957    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:28:54.743135    3992 logs.go:276] 0 containers: []
	W0918 13:28:54.743147    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:28:54.743227    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:28:54.754937    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:28:54.754953    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:28:54.754958    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:28:54.767196    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:28:54.767206    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:28:54.786203    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:28:54.786214    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:28:54.797033    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:28:54.797044    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:28:54.830730    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:28:54.830739    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:28:54.846846    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:28:54.846857    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:28:54.860548    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:28:54.860558    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:28:54.872374    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:28:54.872387    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:28:54.900629    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:28:54.900639    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:28:54.925486    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:28:54.925496    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:28:54.937646    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:28:54.937657    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:28:54.942187    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:28:54.942199    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:28:54.977025    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:28:54.977036    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:28:57.489911    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:02.492029    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:02.492290    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:02.509531    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:02.509631    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:02.522335    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:02.522429    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:02.533541    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:02.533631    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:02.543818    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:02.543901    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:02.554356    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:02.554435    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:02.564733    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:02.564817    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:02.575206    3992 logs.go:276] 0 containers: []
	W0918 13:29:02.575216    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:02.575281    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:02.585996    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:02.586012    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:02.586018    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:02.597629    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:02.597640    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:02.612615    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:02.612628    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:02.625101    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:02.625114    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:02.636584    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:02.636594    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:02.660032    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:02.660041    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:02.671772    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:02.671782    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:02.708548    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:02.708559    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:02.723591    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:02.723602    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:02.737635    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:02.737649    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:02.748757    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:02.748767    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:02.765679    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:02.765691    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:02.798643    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:02.798651    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:05.305056    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:10.307084    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:10.307566    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:10.321535    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:10.321635    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:10.333273    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:10.333349    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:10.346448    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:10.346519    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:10.356763    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:10.356831    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:10.367433    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:10.367525    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:10.380092    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:10.380181    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:10.390456    3992 logs.go:276] 0 containers: []
	W0918 13:29:10.390470    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:10.390545    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:10.400563    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:10.400578    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:10.400583    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:10.416280    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:10.416290    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:10.427681    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:10.427690    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:10.463920    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:10.463934    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:10.499216    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:10.499227    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:10.513500    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:10.513510    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:10.527569    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:10.527585    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:10.539254    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:10.539265    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:10.550830    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:10.550841    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:10.555487    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:10.555495    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:10.567640    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:10.567650    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:10.582708    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:10.582718    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:10.599678    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:10.599688    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
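The block above is one full iteration of minikube's apiserver wait loop: a healthz probe against https://10.0.2.15:8443/healthz times out after about 5 s (the Client.Timeout error at api_server.go:269), the runner re-enumerates each control-plane container with docker ps -a --filter=name=k8s_<component>, then tails the last 400 lines of every container found plus the kubelet, docker, and cri-docker journals, dmesg, kubectl describe nodes, and container status. The same cycle repeats below roughly every 7-8 seconds until the outer wait deadline expires. A minimal bash sketch of that probe-then-collect pattern — not minikube's actual Go implementation; the endpoint, 400-line tail, and unit names are copied from the entries above, while the retry cadence and the "ok" body check are approximations — looks like:

    #!/bin/bash
    # Hedged sketch of the probe-then-collect loop visible in this log.
    APISERVER="https://10.0.2.15:8443/healthz"
    while true; do
      # 5-second probe, mirroring the ~5 s gap between the
      # api_server.go:253 and api_server.go:269 entries above.
      if curl -sk --max-time 5 "$APISERVER" | grep -q ok; then
        echo "apiserver healthy"
        break
      fi
      # On timeout, re-enumerate each control-plane container and
      # tail its last 400 lines, exactly as the log entries do.
      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
               kube-controller-manager kindnet storage-provisioner; do
        for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
          docker logs --tail 400 "$id"
        done
      done
      sudo journalctl -u kubelet -n 400                # kubelet logs
      sudo journalctl -u docker -u cri-docker -n 400   # Docker / cri-docker logs
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sleep 2.5   # approximate gap between collection end and next probe
    done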
	I0918 13:29:13.126578    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:18.128798    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:18.128998    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:18.141230    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:18.141328    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:18.152232    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:18.152316    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:18.163086    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:18.163164    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:18.173561    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:18.173648    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:18.184203    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:18.184294    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:18.195296    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:18.195369    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:18.210276    3992 logs.go:276] 0 containers: []
	W0918 13:29:18.210288    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:18.210363    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:18.221106    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:18.221123    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:18.221129    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:18.235166    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:18.235178    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:18.250102    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:18.250113    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:18.268188    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:18.268199    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:18.280137    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:18.280151    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:18.315896    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:18.315905    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:18.320451    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:18.320460    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:18.356294    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:18.356303    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:18.370707    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:18.370718    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:18.384882    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:18.384892    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:18.396551    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:18.396566    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:18.409214    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:18.409226    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:18.433669    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:18.433678    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:20.947069    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:25.949219    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:25.949423    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:25.966821    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:25.966927    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:25.981164    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:25.981257    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:25.992561    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:25.992638    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:26.003426    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:26.003502    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:26.015989    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:26.016075    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:26.029913    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:26.029991    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:26.040189    3992 logs.go:276] 0 containers: []
	W0918 13:29:26.040201    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:26.040267    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:26.057270    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:26.057285    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:26.057291    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:26.072097    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:26.072110    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:26.084034    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:26.084047    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:26.112921    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:26.112930    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:26.117187    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:26.117194    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:26.152539    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:26.152550    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:26.166946    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:26.166957    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:26.182205    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:26.182215    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:26.193473    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:26.193486    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:26.209368    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:26.209379    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:26.221532    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:26.221542    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:26.239363    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:26.239371    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:26.275812    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:26.275828    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:28.789467    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:33.791620    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:33.791761    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:33.805900    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:33.806010    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:33.817333    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:33.817405    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:33.827526    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:33.827609    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:33.838083    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:33.838155    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:33.848694    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:33.848762    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:33.859392    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:33.859473    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:33.869753    3992 logs.go:276] 0 containers: []
	W0918 13:29:33.869766    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:33.869837    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:33.879970    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:33.879984    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:33.879991    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:33.919124    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:33.919133    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:33.933357    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:33.933367    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:33.967454    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:33.967464    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:33.972156    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:33.972162    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:33.986197    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:33.986208    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:33.997983    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:33.997996    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:34.009899    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:34.009909    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:34.024197    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:34.024205    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:34.036407    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:34.036423    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:34.054341    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:34.054351    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:34.068557    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:34.068566    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:34.093319    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:34.093332    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:36.606687    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:41.608968    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:41.609205    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:41.626019    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:41.626123    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:41.639696    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:41.639778    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:41.651543    3992 logs.go:276] 2 containers: [cd3fdcf67e60 519acac36e74]
	I0918 13:29:41.651629    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:41.662409    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:41.662490    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:41.672822    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:41.672909    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:41.683065    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:41.683143    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:41.693305    3992 logs.go:276] 0 containers: []
	W0918 13:29:41.693316    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:41.693387    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:41.704100    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:41.704117    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:41.704124    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:41.708981    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:41.708989    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:41.723513    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:41.723529    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:41.738206    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:41.738216    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:41.749733    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:41.749745    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:41.766972    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:41.766986    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:41.778629    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:41.778644    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:41.802085    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:41.802105    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:41.838376    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:41.838389    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:41.874848    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:41.874859    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:41.889472    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:41.889482    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:41.900905    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:41.900916    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:41.912457    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:41.912469    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:44.427008    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:49.429150    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:49.429368    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:49.441338    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:49.441427    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:49.457751    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:49.457835    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:49.468578    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:29:49.468664    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:49.479085    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:49.479168    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:49.489871    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:49.489941    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:49.500585    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:49.500711    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:49.511075    3992 logs.go:276] 0 containers: []
	W0918 13:29:49.511087    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:49.511154    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:49.523723    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:49.523739    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:49.523746    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:49.558075    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:29:49.558090    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:29:49.570127    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:49.570138    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:49.581711    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:49.581724    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:49.599045    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:49.599059    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:49.624408    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:49.624416    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:49.628730    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:49.628737    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:49.640616    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:49.640631    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:49.655299    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:49.655308    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:49.667418    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:49.667430    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:49.701134    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:49.701149    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:29:49.714975    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:49.714989    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:49.726195    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:49.726209    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:49.740455    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:29:49.740469    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:29:49.752046    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:49.752060    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:52.268406    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:29:57.270786    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:29:57.271047    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:29:57.290026    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:29:57.290129    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:29:57.306659    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:29:57.306738    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:29:57.317472    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:29:57.317558    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:29:57.329983    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:29:57.330060    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:29:57.340800    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:29:57.340877    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:29:57.351513    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:29:57.351585    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:29:57.362121    3992 logs.go:276] 0 containers: []
	W0918 13:29:57.362134    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:29:57.362204    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:29:57.373237    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:29:57.373256    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:29:57.373261    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:29:57.387982    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:29:57.387995    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:29:57.401806    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:29:57.401818    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:29:57.413570    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:29:57.413580    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:29:57.425603    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:29:57.425613    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:29:57.437505    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:29:57.437517    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:29:57.450037    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:29:57.450048    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:29:57.484469    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:29:57.484481    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:29:57.501847    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:29:57.501858    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:29:57.538002    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:29:57.538013    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:29:57.552738    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:29:57.552751    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:29:57.564667    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:29:57.564677    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:29:57.583040    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:29:57.583050    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:29:57.607010    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:29:57.607021    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:29:57.612213    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:29:57.612224    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:00.139639    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:05.141756    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:05.141882    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:05.153287    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:05.153371    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:05.163964    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:05.164046    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:05.177812    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:05.177895    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:05.194258    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:05.194349    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:05.206406    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:05.206483    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:05.216612    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:05.216688    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:05.226357    3992 logs.go:276] 0 containers: []
	W0918 13:30:05.226369    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:05.226437    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:05.236642    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:05.236661    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:05.236667    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:05.258765    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:05.258775    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:05.270189    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:05.270202    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:05.282268    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:05.282278    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:05.299872    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:05.299883    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:05.315251    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:05.315263    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:05.332976    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:05.332986    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:05.344633    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:05.344645    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:05.358415    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:05.358429    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:05.370347    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:05.370362    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:05.381710    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:05.381720    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:05.405778    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:05.405787    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:05.438919    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:05.438931    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:05.443458    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:05.443465    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:05.480592    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:05.480604    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:07.997045    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:12.999175    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:12.999416    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:13.014212    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:13.014309    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:13.027221    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:13.027323    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:13.038721    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:13.038808    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:13.053341    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:13.053427    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:13.063966    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:13.064054    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:13.075684    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:13.075770    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:13.086593    3992 logs.go:276] 0 containers: []
	W0918 13:30:13.086604    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:13.086677    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:13.097448    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:13.097465    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:13.097471    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:13.113344    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:13.113355    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:13.124879    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:13.124889    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:13.139245    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:13.139255    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:13.157758    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:13.157769    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:13.193082    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:13.193093    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:13.227827    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:13.227838    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:13.239594    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:13.239607    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:13.252752    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:13.252767    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:13.265358    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:13.265373    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:13.276881    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:13.276897    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:13.281052    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:13.281059    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:13.295201    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:13.295213    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:13.307358    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:13.307372    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:13.319532    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:13.319547    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:15.845742    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:20.848020    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:20.848223    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:20.869294    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:20.869394    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:20.880240    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:20.880328    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:20.891147    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:20.891230    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:20.905324    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:20.905414    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:20.919505    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:20.919593    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:20.929849    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:20.929934    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:20.940992    3992 logs.go:276] 0 containers: []
	W0918 13:30:20.941004    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:20.941071    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:20.951489    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:20.951505    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:20.951511    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:20.965713    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:20.965724    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:20.980237    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:20.980253    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:20.993578    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:20.993589    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:21.009592    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:21.009604    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:21.024229    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:21.024240    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:21.036507    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:21.036517    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:21.054447    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:21.054460    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:21.089577    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:21.089592    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:21.102538    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:21.102549    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:21.128511    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:21.128522    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:21.140864    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:21.140880    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:21.145687    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:21.145695    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:21.157390    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:21.157402    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:21.168784    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:21.168800    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:23.709023    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:28.711262    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:28.711537    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:28.733277    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:28.733407    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:28.749248    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:28.749355    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:28.762689    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:28.762770    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:28.773823    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:28.773906    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:28.784039    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:28.784122    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:28.794345    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:28.794421    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:28.804464    3992 logs.go:276] 0 containers: []
	W0918 13:30:28.804475    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:28.804540    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:28.814843    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:28.814864    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:28.814870    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:28.826615    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:28.826625    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:28.851547    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:28.851562    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:28.886289    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:28.886302    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:28.900826    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:28.900838    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:28.914522    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:28.914532    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:28.926726    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:28.926736    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:28.938796    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:28.938808    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:28.951141    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:28.951152    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:28.965595    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:28.965612    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:28.983697    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:28.983707    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:29.018502    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:29.018519    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:29.030592    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:29.030604    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:29.042338    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:29.042348    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:29.047154    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:29.047161    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:31.561012    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:36.563260    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:36.563584    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:36.591245    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:36.591400    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:36.608266    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:36.608375    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:36.621793    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:36.621892    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:36.633232    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:36.633316    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:36.643599    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:36.643680    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:36.654090    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:36.654172    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:36.664352    3992 logs.go:276] 0 containers: []
	W0918 13:30:36.664366    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:36.664442    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:36.674826    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:36.674848    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:36.674854    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:36.693206    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:36.693217    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:36.704718    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:36.704728    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:36.709028    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:36.709034    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:36.723113    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:36.723123    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:36.734728    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:36.734742    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:36.769852    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:36.769860    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:36.794664    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:36.794673    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:36.810914    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:36.810925    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:36.822325    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:36.822335    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:36.834294    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:36.834304    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:36.848504    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:36.848514    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:36.860540    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:36.860553    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:36.898223    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:36.898234    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:36.913050    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:36.913059    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:39.430249    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:44.432484    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:44.432622    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:44.447365    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:44.447461    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:44.459775    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:44.459868    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:44.470631    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:44.470709    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:44.481444    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:44.481516    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:44.492029    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:44.492110    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:44.502644    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:44.502721    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:44.512758    3992 logs.go:276] 0 containers: []
	W0918 13:30:44.512770    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:44.512847    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:44.523074    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:44.523091    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:44.523096    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:44.535263    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:44.535272    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:44.569597    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:44.569613    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:44.584244    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:44.584259    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:44.596157    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:44.596168    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:44.630671    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:44.630679    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:44.647317    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:44.647332    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:44.658458    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:44.658470    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:44.673633    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:44.673649    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:44.685048    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:44.685061    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:44.697504    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:44.697519    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:44.701749    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:44.701755    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:44.713774    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:44.713783    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:44.730752    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:44.730763    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:44.754897    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:44.754906    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:47.268728    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:30:52.269376    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:30:52.269771    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:30:52.299981    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:30:52.300135    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:30:52.317755    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:30:52.317856    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:30:52.331964    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:30:52.332061    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:30:52.343790    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:30:52.343876    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:30:52.354379    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:30:52.354463    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:30:52.364800    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:30:52.364888    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:30:52.379185    3992 logs.go:276] 0 containers: []
	W0918 13:30:52.379196    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:30:52.379266    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:30:52.390138    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:30:52.390165    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:30:52.390171    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:30:52.394976    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:30:52.394983    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:30:52.409261    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:30:52.409275    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:30:52.424944    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:30:52.424957    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:30:52.437104    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:30:52.437119    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:30:52.473553    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:30:52.473569    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:30:52.507741    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:30:52.507750    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:30:52.518901    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:30:52.518916    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:30:52.536348    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:30:52.536358    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:30:52.551358    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:30:52.551372    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:30:52.563339    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:30:52.563350    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:30:52.574816    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:30:52.574826    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:30:52.589717    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:30:52.589726    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:30:52.600763    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:30:52.600773    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:30:52.625503    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:30:52.625511    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:30:55.140051    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:00.142256    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:00.142587    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:31:00.171209    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:31:00.171350    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:31:00.188547    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:31:00.188654    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:31:00.202281    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:31:00.202374    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:31:00.214676    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:31:00.214762    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:31:00.225292    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:31:00.225379    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:31:00.236956    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:31:00.237028    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:31:00.247756    3992 logs.go:276] 0 containers: []
	W0918 13:31:00.247767    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:31:00.247832    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:31:00.258464    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:31:00.258486    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:31:00.258492    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:31:00.270576    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:31:00.270586    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:31:00.288282    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:31:00.288292    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:31:00.299523    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:31:00.299535    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:31:00.324122    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:31:00.324133    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:31:00.328107    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:31:00.328116    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:31:00.342235    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:31:00.342245    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:31:00.354100    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:31:00.354113    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:31:00.365352    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:31:00.365367    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:00.398805    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:31:00.398814    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:31:00.413105    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:31:00.413116    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:31:00.425069    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:31:00.425079    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:31:00.436923    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:31:00.436939    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:31:00.472206    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:31:00.472221    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:31:00.483944    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:31:00.483954    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:31:03.000842    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:08.003080    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:08.003351    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:31:08.025270    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:31:08.025383    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:31:08.040513    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:31:08.040603    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:31:08.053143    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:31:08.053236    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:31:08.065075    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:31:08.065165    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:31:08.100658    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:31:08.100750    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:31:08.114138    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:31:08.114220    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:31:08.124807    3992 logs.go:276] 0 containers: []
	W0918 13:31:08.124823    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:31:08.124891    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:31:08.135583    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:31:08.135603    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:31:08.135609    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:31:08.148157    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:31:08.148171    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:31:08.159644    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:31:08.159657    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:31:08.177164    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:31:08.177177    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:31:08.188940    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:31:08.188952    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:31:08.203726    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:31:08.203737    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:31:08.217843    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:31:08.217853    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:31:08.229380    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:31:08.229394    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:31:08.247394    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:31:08.247410    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:31:08.280987    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:31:08.280997    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:31:08.293728    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:31:08.293738    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:31:08.307650    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:31:08.307658    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:31:08.333253    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:31:08.333266    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:08.367388    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:31:08.367397    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:31:08.371704    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:31:08.371710    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:31:10.883763    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:15.886068    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:15.886348    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:31:15.908009    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:31:15.908157    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:31:15.927375    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:31:15.927462    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:31:15.939907    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:31:15.939997    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:31:15.950454    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:31:15.950538    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:31:15.964299    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:31:15.964383    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:31:15.975124    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:31:15.975209    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:31:15.985573    3992 logs.go:276] 0 containers: []
	W0918 13:31:15.985588    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:31:15.985659    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:31:15.996138    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:31:15.996159    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:31:15.996165    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:31:16.011061    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:31:16.011071    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:31:16.027434    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:31:16.027446    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:31:16.061148    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:31:16.061159    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:31:16.073588    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:31:16.073598    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:31:16.085748    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:31:16.085758    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:31:16.098513    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:31:16.098525    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:31:16.123598    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:31:16.123608    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:31:16.127841    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:31:16.127851    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:31:16.147219    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:31:16.147231    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:31:16.164980    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:31:16.164990    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:31:16.176750    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:31:16.176761    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:31:16.188282    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:31:16.188291    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:16.223295    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:31:16.223305    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:31:16.235445    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:31:16.235457    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:31:18.755344    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:23.757456    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:23.757640    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 13:31:23.773537    3992 logs.go:276] 1 containers: [cd7041dc6a76]
	I0918 13:31:23.773628    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 13:31:23.786494    3992 logs.go:276] 1 containers: [61ef46744fc2]
	I0918 13:31:23.786587    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 13:31:23.797677    3992 logs.go:276] 4 containers: [7fa175d9d338 4c69a1e4be63 cd3fdcf67e60 519acac36e74]
	I0918 13:31:23.797793    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 13:31:23.807751    3992 logs.go:276] 1 containers: [24979147a7f5]
	I0918 13:31:23.807833    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 13:31:23.818484    3992 logs.go:276] 1 containers: [ea76c830b5bc]
	I0918 13:31:23.818570    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 13:31:23.834733    3992 logs.go:276] 1 containers: [067d32c12bb9]
	I0918 13:31:23.834808    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 13:31:23.845808    3992 logs.go:276] 0 containers: []
	W0918 13:31:23.845819    3992 logs.go:278] No container was found matching "kindnet"
	I0918 13:31:23.845891    3992 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 13:31:23.856287    3992 logs.go:276] 1 containers: [719df67be247]
	I0918 13:31:23.856308    3992 logs.go:123] Gathering logs for Docker ...
	I0918 13:31:23.856315    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 13:31:23.879294    3992 logs.go:123] Gathering logs for kubelet ...
	I0918 13:31:23.879303    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0918 13:31:23.912993    3992 logs.go:123] Gathering logs for kube-apiserver [cd7041dc6a76] ...
	I0918 13:31:23.913001    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd7041dc6a76"
	I0918 13:31:23.927271    3992 logs.go:123] Gathering logs for coredns [4c69a1e4be63] ...
	I0918 13:31:23.927283    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c69a1e4be63"
	I0918 13:31:23.939713    3992 logs.go:123] Gathering logs for coredns [cd3fdcf67e60] ...
	I0918 13:31:23.939727    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd3fdcf67e60"
	I0918 13:31:23.960964    3992 logs.go:123] Gathering logs for kube-proxy [ea76c830b5bc] ...
	I0918 13:31:23.960975    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76c830b5bc"
	I0918 13:31:23.973815    3992 logs.go:123] Gathering logs for container status ...
	I0918 13:31:23.973827    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 13:31:23.985703    3992 logs.go:123] Gathering logs for etcd [61ef46744fc2] ...
	I0918 13:31:23.985720    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ef46744fc2"
	I0918 13:31:23.999838    3992 logs.go:123] Gathering logs for coredns [7fa175d9d338] ...
	I0918 13:31:23.999849    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fa175d9d338"
	I0918 13:31:24.010931    3992 logs.go:123] Gathering logs for coredns [519acac36e74] ...
	I0918 13:31:24.010942    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 519acac36e74"
	I0918 13:31:24.022950    3992 logs.go:123] Gathering logs for kube-scheduler [24979147a7f5] ...
	I0918 13:31:24.022963    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24979147a7f5"
	I0918 13:31:24.044050    3992 logs.go:123] Gathering logs for storage-provisioner [719df67be247] ...
	I0918 13:31:24.044061    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719df67be247"
	I0918 13:31:24.056167    3992 logs.go:123] Gathering logs for dmesg ...
	I0918 13:31:24.056184    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 13:31:24.060326    3992 logs.go:123] Gathering logs for describe nodes ...
	I0918 13:31:24.060333    3992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 13:31:24.094561    3992 logs.go:123] Gathering logs for kube-controller-manager [067d32c12bb9] ...
	I0918 13:31:24.094574    3992 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 067d32c12bb9"
	I0918 13:31:26.614407    3992 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0918 13:31:31.616629    3992 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0918 13:31:31.622090    3992 out.go:201] 
	W0918 13:31:31.626044    3992 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0918 13:31:31.626051    3992 out.go:270] * 
	W0918 13:31:31.626565    3992 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:31:31.637973    3992 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-367000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (606.99s)
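
The loop captured above is minikube's API-server readiness gate: every few seconds it probes https://10.0.2.15:8443/healthz with a ~5s deadline, and after each timeout it re-lists the control-plane containers (docker ps -a --filter=name=k8s_<component>) and re-gathers their logs, until the 6m0s node-start budget runs out and the run exits with GUEST_START. For reference, a minimal Go sketch of one equivalent probe; the endpoint, port, and timeout are taken from the log above, and this is illustrative only, not minikube's own api_server.go code:

	// healthz_probe.go - one probe equivalent to the "Checking apiserver
	// healthz" lines above (sketch; endpoint and timeout from the log).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // the log shows each attempt giving up after ~5s
			Transport: &http.Transport{
				// the in-VM apiserver serves a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // the failure mode logged above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println("healthz:", resp.StatusCode, string(body))
	}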

TestPause/serial/Start (10.17s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-864000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-864000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.138980458s)

-- stdout --
	* [pause-864000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-864000" primary control-plane node in "pause-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-864000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-864000 -n pause-864000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-864000 -n pause-864000: exit status 7 (33.065458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-864000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.17s)
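
Both creation attempts above die before provisioning for the same root cause: the qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and the connect to the /var/run/socket_vmnet unix socket is refused, meaning the socket_vmnet daemon is not running (or is listening on another path). A minimal Go sketch of that connectivity check, assuming the socket path from the log (dialing the socket may require root, depending on how socket_vmnet was installed):

	// vmnet_check.go - dials the unix socket the qemu2 driver needs;
	// "connection refused" here reproduces the error above (sketch only).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet accepted the connection")
	}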

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.89s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19667
- KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current451122768/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.89s)
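
Exit status 56 here is the DRV_UNSUPPORTED_OS path shown above: hyperkit is an Intel-only hypervisor, so driver validation can never pass on a darwin/arm64 host, and both upgrade subtests in this group fail the same way. An illustrative Go sketch of such a platform gate (a hypothetical check, not minikube's actual driver-registration code):

	// drv_gate.go - rejects hyperkit anywhere other than darwin/amd64
	// (illustrative; mirrors the DRV_UNSUPPORTED_OS message above).
	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
			return
		}
		fmt.Println("hyperkit is a candidate driver on this host")
	}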

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.73s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.73s)

TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-718000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-718000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.965287875s)

-- stdout --
	* [old-k8s-version-718000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-718000" primary control-plane node in "old-k8s-version-718000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-718000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:32:38.321090    4853 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:32:38.321220    4853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:38.321223    4853 out.go:358] Setting ErrFile to fd 2...
	I0918 13:32:38.321225    4853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:38.321368    4853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:32:38.322427    4853 out.go:352] Setting JSON to false
	I0918 13:32:38.338626    4853 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3717,"bootTime":1726687841,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:32:38.338692    4853 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:32:38.346832    4853 out.go:177] * [old-k8s-version-718000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:32:38.354788    4853 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:32:38.354831    4853 notify.go:220] Checking for updates...
	I0918 13:32:38.361766    4853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:32:38.364751    4853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:32:38.367758    4853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:32:38.370769    4853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:32:38.372200    4853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:32:38.376160    4853 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:32:38.376229    4853 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:32:38.376290    4853 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:32:38.380805    4853 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:32:38.386703    4853 start.go:297] selected driver: qemu2
	I0918 13:32:38.386711    4853 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:32:38.386717    4853 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:32:38.389126    4853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:32:38.391847    4853 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:32:38.394832    4853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:32:38.394847    4853 cni.go:84] Creating CNI manager for ""
	I0918 13:32:38.394868    4853 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 13:32:38.394899    4853 start.go:340] cluster config:
	{Name:old-k8s-version-718000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:32:38.398655    4853 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:38.406888    4853 out.go:177] * Starting "old-k8s-version-718000" primary control-plane node in "old-k8s-version-718000" cluster
	I0918 13:32:38.410735    4853 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 13:32:38.410753    4853 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 13:32:38.410760    4853 cache.go:56] Caching tarball of preloaded images
	I0918 13:32:38.410837    4853 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:32:38.410844    4853 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 13:32:38.410916    4853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/old-k8s-version-718000/config.json ...
	I0918 13:32:38.410927    4853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/old-k8s-version-718000/config.json: {Name:mke7dd68faee131747c33556ce856f62c22a17ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:32:38.411155    4853 start.go:360] acquireMachinesLock for old-k8s-version-718000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:38.411192    4853 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "old-k8s-version-718000"
	I0918 13:32:38.411204    4853 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-718000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:32:38.411236    4853 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:32:38.417777    4853 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:32:38.437922    4853 start.go:159] libmachine.API.Create for "old-k8s-version-718000" (driver="qemu2")
	I0918 13:32:38.437958    4853 client.go:168] LocalClient.Create starting
	I0918 13:32:38.438021    4853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:32:38.438052    4853 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:38.438062    4853 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:38.438100    4853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:32:38.438127    4853 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:38.438135    4853 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:38.438575    4853 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:32:38.603269    4853 main.go:141] libmachine: Creating SSH key...
	I0918 13:32:38.662861    4853 main.go:141] libmachine: Creating Disk image...
	I0918 13:32:38.662868    4853 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:32:38.663050    4853 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:38.672408    4853 main.go:141] libmachine: STDOUT: 
	I0918 13:32:38.672423    4853 main.go:141] libmachine: STDERR: 
	I0918 13:32:38.672487    4853 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2 +20000M
	I0918 13:32:38.680402    4853 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:32:38.680416    4853 main.go:141] libmachine: STDERR: 
	I0918 13:32:38.680437    4853 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:38.680443    4853 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:32:38.680454    4853 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:38.680485    4853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:12:c9:e3:21:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:38.682105    4853 main.go:141] libmachine: STDOUT: 
	I0918 13:32:38.682119    4853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:38.682148    4853 client.go:171] duration metric: took 244.189959ms to LocalClient.Create
	I0918 13:32:40.684311    4853 start.go:128] duration metric: took 2.273106458s to createHost
	I0918 13:32:40.684402    4853 start.go:83] releasing machines lock for "old-k8s-version-718000", held for 2.27326s
	W0918 13:32:40.684457    4853 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:40.695728    4853 out.go:177] * Deleting "old-k8s-version-718000" in qemu2 ...
	W0918 13:32:40.728605    4853 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:40.728635    4853 start.go:729] Will try again in 5 seconds ...
	I0918 13:32:45.730804    4853 start.go:360] acquireMachinesLock for old-k8s-version-718000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:45.731254    4853 start.go:364] duration metric: took 352.25µs to acquireMachinesLock for "old-k8s-version-718000"
	I0918 13:32:45.731400    4853 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-718000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:32:45.731662    4853 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:32:45.744277    4853 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:32:45.797272    4853 start.go:159] libmachine.API.Create for "old-k8s-version-718000" (driver="qemu2")
	I0918 13:32:45.797326    4853 client.go:168] LocalClient.Create starting
	I0918 13:32:45.797451    4853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:32:45.797506    4853 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:45.797525    4853 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:45.797590    4853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:32:45.797635    4853 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:45.797652    4853 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:45.798240    4853 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:32:45.974490    4853 main.go:141] libmachine: Creating SSH key...
	I0918 13:32:46.182330    4853 main.go:141] libmachine: Creating Disk image...
	I0918 13:32:46.182337    4853 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:32:46.182542    4853 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:46.192771    4853 main.go:141] libmachine: STDOUT: 
	I0918 13:32:46.192789    4853 main.go:141] libmachine: STDERR: 
	I0918 13:32:46.192847    4853 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2 +20000M
	I0918 13:32:46.201041    4853 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:32:46.201055    4853 main.go:141] libmachine: STDERR: 
	I0918 13:32:46.201072    4853 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:46.201077    4853 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:32:46.201086    4853 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:46.201277    4853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9c:e1:1b:26:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:46.204154    4853 main.go:141] libmachine: STDOUT: 
	I0918 13:32:46.204176    4853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:46.204189    4853 client.go:171] duration metric: took 406.867916ms to LocalClient.Create
	I0918 13:32:48.206317    4853 start.go:128] duration metric: took 2.474689834s to createHost
	I0918 13:32:48.206392    4853 start.go:83] releasing machines lock for "old-k8s-version-718000", held for 2.47517725s
	W0918 13:32:48.206769    4853 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-718000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-718000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:48.223377    4853 out.go:201] 
	W0918 13:32:48.227372    4853 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:32:48.227413    4853 out.go:270] * 
	* 
	W0918 13:32:48.229945    4853 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:32:48.245332    4853 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-718000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (65.772084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)
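
Every old-k8s-version failure below reduces to the single error visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so the qemu2 driver can never attach its virtio-net device and the VM never boots. A minimal pre-flight probe for the test host, assuming only the Go standard library (the file and function names are illustrative, not part of minikube), might look like:

    // preflight.go - hypothetical check that the socket_vmnet daemon is up
    // before the qemu2 tests run. It dials the same unix socket that
    // socket_vmnet_client uses in the failing commands above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the config dump
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // The exact condition the tests hit:
            // Failed to connect to "/var/run/socket_vmnet": Connection refused
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Run before the suite, a probe like this would turn ten seconds of createHost retries per test into one immediate, actionable error.
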
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-718000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-718000 create -f testdata/busybox.yaml: exit status 1 (30.056375ms)
** stderr **
	error: context "old-k8s-version-718000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-718000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (29.854792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (29.620958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-718000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-718000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-718000 describe deploy/metrics-server -n kube-system: exit status 1 (26.782ms)
** stderr **
	error: context "old-k8s-version-718000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-718000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (30.196125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
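
The failure here is secondary: `addons enable` itself exited zero, but the verification step needs a live apiserver. That verification amounts to describing the deployment and checking that it references the overridden registry. A sketch of that check, assuming only os/exec and a kubectl on PATH (the helper name is illustrative, not the test's actual code):

    // addoncheck.go - hypothetical rendering of the image-override assertion.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // addonUsesImage reports whether deploy/metrics-server in kube-system
    // references the given image string.
    func addonUsesImage(context, image string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", context,
            "describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
        if err != nil {
            return false, fmt.Errorf("describe failed: %v: %s", err, out)
        }
        return strings.Contains(string(out), image), nil
    }

    func main() {
        ok, err := addonUsesImage("old-k8s-version-718000",
            "fake.domain/registry.k8s.io/echoserver:1.4")
        fmt.Println(ok, err) // here: false, because the context was never created
    }
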
TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-718000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-718000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.192453292s)
-- stdout --
	* [old-k8s-version-718000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-718000" primary control-plane node in "old-k8s-version-718000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-718000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-718000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0918 13:32:51.651869    4912 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:32:51.652000    4912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:51.652003    4912 out.go:358] Setting ErrFile to fd 2...
	I0918 13:32:51.652005    4912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:51.652142    4912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:32:51.653157    4912 out.go:352] Setting JSON to false
	I0918 13:32:51.669011    4912 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3730,"bootTime":1726687841,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:32:51.669115    4912 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:32:51.673230    4912 out.go:177] * [old-k8s-version-718000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:32:51.680100    4912 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:32:51.680132    4912 notify.go:220] Checking for updates...
	I0918 13:32:51.687132    4912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:32:51.690221    4912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:32:51.693119    4912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:32:51.696150    4912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:32:51.699116    4912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:32:51.702435    4912 config.go:182] Loaded profile config "old-k8s-version-718000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0918 13:32:51.706147    4912 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 13:32:51.709098    4912 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:32:51.713114    4912 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:32:51.720100    4912 start.go:297] selected driver: qemu2
	I0918 13:32:51.720107    4912 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-718000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:32:51.720202    4912 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:32:51.722531    4912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:32:51.722561    4912 cni.go:84] Creating CNI manager for ""
	I0918 13:32:51.722588    4912 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 13:32:51.722612    4912 start.go:340] cluster config:
	{Name:old-k8s-version-718000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:32:51.726343    4912 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:51.734124    4912 out.go:177] * Starting "old-k8s-version-718000" primary control-plane node in "old-k8s-version-718000" cluster
	I0918 13:32:51.738097    4912 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 13:32:51.738113    4912 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 13:32:51.738122    4912 cache.go:56] Caching tarball of preloaded images
	I0918 13:32:51.738197    4912 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:32:51.738204    4912 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 13:32:51.738258    4912 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/old-k8s-version-718000/config.json ...
	I0918 13:32:51.738767    4912 start.go:360] acquireMachinesLock for old-k8s-version-718000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:51.738805    4912 start.go:364] duration metric: took 31.458µs to acquireMachinesLock for "old-k8s-version-718000"
	I0918 13:32:51.738815    4912 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:32:51.738824    4912 fix.go:54] fixHost starting: 
	I0918 13:32:51.738960    4912 fix.go:112] recreateIfNeeded on old-k8s-version-718000: state=Stopped err=<nil>
	W0918 13:32:51.738972    4912 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:32:51.742048    4912 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-718000" ...
	I0918 13:32:51.750120    4912 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:51.750164    4912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9c:e1:1b:26:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:51.752320    4912 main.go:141] libmachine: STDOUT: 
	I0918 13:32:51.752341    4912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:51.752376    4912 fix.go:56] duration metric: took 13.553542ms for fixHost
	I0918 13:32:51.752380    4912 start.go:83] releasing machines lock for "old-k8s-version-718000", held for 13.570583ms
	W0918 13:32:51.752387    4912 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:32:51.752426    4912 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:51.752431    4912 start.go:729] Will try again in 5 seconds ...
	I0918 13:32:56.754514    4912 start.go:360] acquireMachinesLock for old-k8s-version-718000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:56.754960    4912 start.go:364] duration metric: took 295µs to acquireMachinesLock for "old-k8s-version-718000"
	I0918 13:32:56.755060    4912 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:32:56.755079    4912 fix.go:54] fixHost starting: 
	I0918 13:32:56.755881    4912 fix.go:112] recreateIfNeeded on old-k8s-version-718000: state=Stopped err=<nil>
	W0918 13:32:56.755909    4912 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:32:56.765273    4912 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-718000" ...
	I0918 13:32:56.769301    4912 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:56.769713    4912 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9c:e1:1b:26:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/old-k8s-version-718000/disk.qcow2
	I0918 13:32:56.779096    4912 main.go:141] libmachine: STDOUT: 
	I0918 13:32:56.779153    4912 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:56.779211    4912 fix.go:56] duration metric: took 24.135042ms for fixHost
	I0918 13:32:56.779225    4912 start.go:83] releasing machines lock for "old-k8s-version-718000", held for 24.241042ms
	W0918 13:32:56.779390    4912 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-718000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-718000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:56.787284    4912 out.go:201] 
	W0918 13:32:56.791160    4912 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:32:56.791183    4912 out.go:270] * 
	* 
	W0918 13:32:56.793779    4912 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:32:56.802253    4912 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-718000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (69.28525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
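
The second-start path shows minikube's fixed retry budget: fixHost fails, start.go logs "Will try again in 5 seconds" (the 13:32:51 to 13:32:56 gap above), and exactly one more attempt is made before exiting with GUEST_PROVISION. A simplified sketch of that control flow, with illustrative names rather than minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHostWithRetry tolerates one StartHost failure, waits a fixed 5s,
    // and treats a second failure as fatal - mirroring the log above.
    func startHostWithRetry(start func() error) error {
        if err := start(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second)
            if err := start(); err != nil {
                return fmt.Errorf("error provisioning guest: %w", err)
            }
        }
        return nil
    }

    func main() {
        err := startHostWithRetry(func() error {
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        fmt.Println(err) // fails twice, as in the run above
    }
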
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-718000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (32.717583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
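
"client config: context ... does not exist" is raised during kubeconfig resolution, before any network I/O: the helper asks for a REST config bound to the profile's context, and the first start never wrote that context. With client-go the equivalent lookup is roughly the following (a sketch, not the test's exact code; it assumes k8s.io/client-go is available):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        loader := clientcmd.NewDefaultClientConfigLoadingRules()
        overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-718000"}
        cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loader, overrides)
        if _, err := cfg.ClientConfig(); err != nil {
            // Prints: context "old-k8s-version-718000" does not exist
            fmt.Println(err)
        }
    }
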
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-718000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-718000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-718000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.925125ms)
** stderr **
	error: context "old-k8s-version-718000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-718000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (30.276833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-718000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (30.317167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
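
The "(-want +got)" block above is go-cmp diff output: the whole expected v1.20.0 image set sits on the -want side and +got is empty, because `image list` ran against a stopped host. The comparison is essentially the following (a sketch using github.com/google/go-cmp; variable names are illustrative):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // want: the v1.20.0 images from the diff above; got: what
        // `minikube image list` returned from the stopped host (nothing).
        want := []string{
            "k8s.gcr.io/coredns:1.7.0",
            "k8s.gcr.io/etcd:3.4.13-0",
            "k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/kube-controller-manager:v1.20.0",
            "k8s.gcr.io/kube-proxy:v1.20.0",
            "k8s.gcr.io/kube-scheduler:v1.20.0",
            "k8s.gcr.io/pause:3.2",
        }
        var got []string
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }
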
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-718000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-718000 --alsologtostderr -v=1: exit status 83 (39.926333ms)
-- stdout --
	* The control-plane node old-k8s-version-718000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-718000"
-- /stdout --
** stderr ** 
	I0918 13:32:57.076217    4934 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:32:57.076614    4934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:57.076618    4934 out.go:358] Setting ErrFile to fd 2...
	I0918 13:32:57.076621    4934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:57.076790    4934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:32:57.076997    4934 out.go:352] Setting JSON to false
	I0918 13:32:57.077003    4934 mustload.go:65] Loading cluster: old-k8s-version-718000
	I0918 13:32:57.077215    4934 config.go:182] Loaded profile config "old-k8s-version-718000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0918 13:32:57.078987    4934 out.go:177] * The control-plane node old-k8s-version-718000 host is not running: state=Stopped
	I0918 13:32:57.081931    4934 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-718000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-718000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (29.443916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (30.10825ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
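
Note the exit code: 83, not the 80 seen on the start failures. `pause` attempted no work at all: it loaded the profile, saw state=Stopped, printed the advice text, and exited. The test only needs the numeric code to decide pass/fail, which in Go is read like this (a sketch; the binary path is the one used throughout this report):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "pause",
            "-p", "old-k8s-version-718000", "--alsologtostderr", "-v=1")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("exit status", ee.ExitCode()) // 83 in the run above
        }
    }
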
TestStartStop/group/no-preload/serial/FirstStart (10.06s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-882000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-882000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.993609917s)
-- stdout --
	* [no-preload-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-882000" primary control-plane node in "no-preload-882000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-882000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0918 13:32:57.389047    4951 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:32:57.389194    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:57.389197    4951 out.go:358] Setting ErrFile to fd 2...
	I0918 13:32:57.389199    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:32:57.389326    4951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:32:57.390380    4951 out.go:352] Setting JSON to false
	I0918 13:32:57.406546    4951 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3736,"bootTime":1726687841,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:32:57.406625    4951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:32:57.410942    4951 out.go:177] * [no-preload-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:32:57.417936    4951 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:32:57.417979    4951 notify.go:220] Checking for updates...
	I0918 13:32:57.423849    4951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:32:57.426912    4951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:32:57.429906    4951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:32:57.432907    4951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:32:57.435875    4951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:32:57.439255    4951 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:32:57.439316    4951 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:32:57.439360    4951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:32:57.443876    4951 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:32:57.450891    4951 start.go:297] selected driver: qemu2
	I0918 13:32:57.450898    4951 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:32:57.450903    4951 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:32:57.453026    4951 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:32:57.456888    4951 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:32:57.459985    4951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:32:57.460003    4951 cni.go:84] Creating CNI manager for ""
	I0918 13:32:57.460026    4951 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:32:57.460031    4951 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:32:57.460067    4951 start.go:340] cluster config:
	{Name:no-preload-882000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-882000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:32:57.463793    4951 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.472862    4951 out.go:177] * Starting "no-preload-882000" primary control-plane node in "no-preload-882000" cluster
	I0918 13:32:57.476887    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:32:57.476981    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/no-preload-882000/config.json ...
	I0918 13:32:57.477004    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/no-preload-882000/config.json: {Name:mkd7fd6f0f1433ba38cdbb5985abd7448aae7c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:32:57.477008    4951 cache.go:107] acquiring lock: {Name:mk95c95aa5f8655020adb740f6ca1f706e369006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477032    4951 cache.go:107] acquiring lock: {Name:mk6c1b17057d3e026c9be3b1404a3516f9788591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477080    4951 cache.go:107] acquiring lock: {Name:mk04eb3341dafcc2fffa0eda22c7026c68df5152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477004    4951 cache.go:107] acquiring lock: {Name:mk25603e177c3eb96a8e1f7614ffe818c7eb0d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477169    4951 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0918 13:32:57.477199    4951 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 13:32:57.477236    4951 cache.go:107] acquiring lock: {Name:mk550f7944de6688f27b59580d4cf95b1034fcba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477283    4951 cache.go:107] acquiring lock: {Name:mkfc044f80edb3ba8d7cfd56bd182d662cc66155 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477306    4951 cache.go:107] acquiring lock: {Name:mk3d84bcf9402d48eda046261e6d2e54b52916e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477261    4951 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0918 13:32:57.477004    4951 cache.go:107] acquiring lock: {Name:mk94a5eafb1e7f7f4b53543baf43f57f344fb5ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:32:57.477381    4951 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 13:32:57.477403    4951 start.go:360] acquireMachinesLock for no-preload-882000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:32:57.477487    4951 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 13:32:57.477498    4951 start.go:364] duration metric: took 89.5µs to acquireMachinesLock for "no-preload-882000"
	I0918 13:32:57.477504    4951 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 13:32:57.477513    4951 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 511.917µs
	I0918 13:32:57.477526    4951 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 13:32:57.477538    4951 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 13:32:57.477514    4951 start.go:93] Provisioning new machine with config: &{Name:no-preload-882000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-882000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:32:57.477589    4951 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:32:57.477635    4951 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 13:32:57.484959    4951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:32:57.490306    4951 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0918 13:32:57.490332    4951 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0918 13:32:57.490369    4951 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0918 13:32:57.490387    4951 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0918 13:32:57.490533    4951 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0918 13:32:57.491045    4951 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0918 13:32:57.491173    4951 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0918 13:32:57.504519    4951 start.go:159] libmachine.API.Create for "no-preload-882000" (driver="qemu2")
	I0918 13:32:57.504553    4951 client.go:168] LocalClient.Create starting
	I0918 13:32:57.504621    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:32:57.504652    4951 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:57.504662    4951 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:57.504700    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:32:57.504725    4951 main.go:141] libmachine: Decoding PEM data...
	I0918 13:32:57.504734    4951 main.go:141] libmachine: Parsing certificate...
	I0918 13:32:57.505057    4951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:32:57.670988    4951 main.go:141] libmachine: Creating SSH key...
	I0918 13:32:57.795495    4951 main.go:141] libmachine: Creating Disk image...
	I0918 13:32:57.795515    4951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:32:57.795707    4951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:32:57.805092    4951 main.go:141] libmachine: STDOUT: 
	I0918 13:32:57.805106    4951 main.go:141] libmachine: STDERR: 
	I0918 13:32:57.805163    4951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2 +20000M
	I0918 13:32:57.813391    4951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:32:57.813406    4951 main.go:141] libmachine: STDERR: 
	I0918 13:32:57.813418    4951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:32:57.813425    4951 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:32:57.813439    4951 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:32:57.813471    4951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:15:17:e8:89:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:32:57.815147    4951 main.go:141] libmachine: STDOUT: 
	I0918 13:32:57.815170    4951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:32:57.815196    4951 client.go:171] duration metric: took 310.645125ms to LocalClient.Create
	I0918 13:32:57.904157    4951 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0918 13:32:57.904382    4951 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0918 13:32:57.942416    4951 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0918 13:32:57.943739    4951 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0918 13:32:57.945694    4951 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0918 13:32:57.986356    4951 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0918 13:32:57.987499    4951 cache.go:162] opening:  /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0918 13:32:58.068592    4951 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0918 13:32:58.068823    4951 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 591.772125ms
	I0918 13:32:58.068853    4951 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0918 13:32:59.815459    4951 start.go:128] duration metric: took 2.337888416s to createHost
	I0918 13:32:59.815541    4951 start.go:83] releasing machines lock for "no-preload-882000", held for 2.338094584s
	W0918 13:32:59.815581    4951 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:59.834310    4951 out.go:177] * Deleting "no-preload-882000" in qemu2 ...
	W0918 13:32:59.874007    4951 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:32:59.874037    4951 start.go:729] Will try again in 5 seconds ...
	I0918 13:33:01.004574    4951 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0918 13:33:01.004632    4951 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 3.527539167s
	I0918 13:33:01.004669    4951 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0918 13:33:01.071580    4951 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0918 13:33:01.071641    4951 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.594499708s
	I0918 13:33:01.071687    4951 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0918 13:33:01.686097    4951 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0918 13:33:01.686155    4951 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.209267417s
	I0918 13:33:01.686183    4951 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0918 13:33:02.154510    4951 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0918 13:33:02.154578    4951 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.677702458s
	I0918 13:33:02.154609    4951 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0918 13:33:02.439192    4951 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0918 13:33:02.439268    4951 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.962115916s
	I0918 13:33:02.439295    4951 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0918 13:33:04.874127    4951 start.go:360] acquireMachinesLock for no-preload-882000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:04.874575    4951 start.go:364] duration metric: took 353.292µs to acquireMachinesLock for "no-preload-882000"
	I0918 13:33:04.874707    4951 start.go:93] Provisioning new machine with config: &{Name:no-preload-882000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-882000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:33:04.874991    4951 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:33:04.894751    4951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:33:04.947625    4951 start.go:159] libmachine.API.Create for "no-preload-882000" (driver="qemu2")
	I0918 13:33:04.947671    4951 client.go:168] LocalClient.Create starting
	I0918 13:33:04.947778    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:33:04.947844    4951 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:04.947874    4951 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:04.947937    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:33:04.947984    4951 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:04.948003    4951 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:04.948517    4951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:33:05.121744    4951 main.go:141] libmachine: Creating SSH key...
	I0918 13:33:05.280675    4951 main.go:141] libmachine: Creating Disk image...
	I0918 13:33:05.280687    4951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:33:05.280891    4951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:33:05.290878    4951 main.go:141] libmachine: STDOUT: 
	I0918 13:33:05.290894    4951 main.go:141] libmachine: STDERR: 
	I0918 13:33:05.290960    4951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2 +20000M
	I0918 13:33:05.299234    4951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:33:05.299250    4951 main.go:141] libmachine: STDERR: 
	I0918 13:33:05.299263    4951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:33:05.299269    4951 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:33:05.299278    4951 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:05.299320    4951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0f:14:81:47:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:33:05.301004    4951 main.go:141] libmachine: STDOUT: 
	I0918 13:33:05.301017    4951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:05.301030    4951 client.go:171] duration metric: took 353.364042ms to LocalClient.Create
	I0918 13:33:05.531338    4951 cache.go:157] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0918 13:33:05.531395    4951 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.054568292s
	I0918 13:33:05.531418    4951 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0918 13:33:05.531493    4951 cache.go:87] Successfully saved all images to host disk.
	I0918 13:33:07.303174    4951 start.go:128] duration metric: took 2.428214833s to createHost
	I0918 13:33:07.303243    4951 start.go:83] releasing machines lock for "no-preload-882000", held for 2.428702292s
	W0918 13:33:07.303616    4951 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-882000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-882000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:07.318280    4951 out.go:201] 
	W0918 13:33:07.323322    4951 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:07.323346    4951 out.go:270] * 
	* 
	W0918 13:33:07.325969    4951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:33:07.339059    4951 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-882000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (67.233292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.06s)
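
Note: every start attempt in this test fails at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the build host, assuming the paths shown in the log above; the exact restart command depends on how socket_vmnet was installed (launchd service vs. manual invocation):

	# confirm the socket exists and a daemon is accepting connections on it
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket_vmnet reachable"
	# if the connection is refused, restart the socket_vmnet daemon per its
	# install method before re-running this test group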

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-882000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-882000 create -f testdata/busybox.yaml: exit status 1 (28.963375ms)

** stderr ** 
	error: context "no-preload-882000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-882000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (31.119333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (29.935333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
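
Note: this failure, and the remaining subtests in the no-preload group, cascade from FirstStart: the VM never booted, so minikube never wrote a kubeconfig context for the profile. A quick confirmation sketch (standard kubectl; profile name taken from the log above):

	# the context is absent because FirstStart never provisioned the VM
	kubectl config get-contexts | grep no-preload-882000 || echo "context missing, as expected"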

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-882000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-882000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-882000 describe deploy/metrics-server -n kube-system: exit status 1 (27.001833ms)

** stderr ** 
	error: context "no-preload-882000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-882000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (30.569625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-882000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-882000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.187871s)

-- stdout --
	* [no-preload-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-882000" primary control-plane node in "no-preload-882000" cluster
	* Restarting existing qemu2 VM for "no-preload-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-882000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:33:10.990856    5037 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:10.990977    5037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:10.990980    5037 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:10.990984    5037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:10.991126    5037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:10.992168    5037 out.go:352] Setting JSON to false
	I0918 13:33:11.008057    5037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3749,"bootTime":1726687841,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:33:11.008123    5037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:33:11.013187    5037 out.go:177] * [no-preload-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:33:11.020180    5037 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:33:11.020258    5037 notify.go:220] Checking for updates...
	I0918 13:33:11.027102    5037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:33:11.030196    5037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:33:11.033198    5037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:33:11.036160    5037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:33:11.039169    5037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:33:11.042513    5037 config.go:182] Loaded profile config "no-preload-882000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:11.042777    5037 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:33:11.047112    5037 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:33:11.054147    5037 start.go:297] selected driver: qemu2
	I0918 13:33:11.054154    5037 start.go:901] validating driver "qemu2" against &{Name:no-preload-882000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-882000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:11.054208    5037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:33:11.056539    5037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:33:11.056565    5037 cni.go:84] Creating CNI manager for ""
	I0918 13:33:11.056585    5037 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:33:11.056614    5037 start.go:340] cluster config:
	{Name:no-preload-882000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-882000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:11.060164    5037 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.067167    5037 out.go:177] * Starting "no-preload-882000" primary control-plane node in "no-preload-882000" cluster
	I0918 13:33:11.071140    5037 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:33:11.071207    5037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/no-preload-882000/config.json ...
	I0918 13:33:11.071220    5037 cache.go:107] acquiring lock: {Name:mk94a5eafb1e7f7f4b53543baf43f57f344fb5ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071222    5037 cache.go:107] acquiring lock: {Name:mk25603e177c3eb96a8e1f7614ffe818c7eb0d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071217    5037 cache.go:107] acquiring lock: {Name:mk3d84bcf9402d48eda046261e6d2e54b52916e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071247    5037 cache.go:107] acquiring lock: {Name:mk04eb3341dafcc2fffa0eda22c7026c68df5152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071281    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 13:33:11.071282    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0918 13:33:11.071286    5037 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.667µs
	I0918 13:33:11.071287    5037 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 75.834µs
	I0918 13:33:11.071293    5037 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 13:33:11.071294    5037 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0918 13:33:11.071300    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0918 13:33:11.071302    5037 cache.go:107] acquiring lock: {Name:mk6c1b17057d3e026c9be3b1404a3516f9788591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071305    5037 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 59.291µs
	I0918 13:33:11.071310    5037 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0918 13:33:11.071325    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0918 13:33:11.071323    5037 cache.go:107] acquiring lock: {Name:mk550f7944de6688f27b59580d4cf95b1034fcba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071329    5037 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 113.459µs
	I0918 13:33:11.071333    5037 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0918 13:33:11.071348    5037 cache.go:107] acquiring lock: {Name:mkfc044f80edb3ba8d7cfd56bd182d662cc66155 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071395    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0918 13:33:11.071401    5037 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 54.458µs
	I0918 13:33:11.071405    5037 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0918 13:33:11.071410    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0918 13:33:11.071414    5037 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 113.166µs
	I0918 13:33:11.071417    5037 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0918 13:33:11.071405    5037 cache.go:107] acquiring lock: {Name:mk95c95aa5f8655020adb740f6ca1f706e369006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:11.071426    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0918 13:33:11.071438    5037 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 113.458µs
	I0918 13:33:11.071444    5037 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0918 13:33:11.071477    5037 cache.go:115] /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0918 13:33:11.071484    5037 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 98.25µs
	I0918 13:33:11.071489    5037 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0918 13:33:11.071493    5037 cache.go:87] Successfully saved all images to host disk.
	I0918 13:33:11.071553    5037 start.go:360] acquireMachinesLock for no-preload-882000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:11.071587    5037 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "no-preload-882000"
	I0918 13:33:11.071597    5037 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:33:11.071603    5037 fix.go:54] fixHost starting: 
	I0918 13:33:11.071729    5037 fix.go:112] recreateIfNeeded on no-preload-882000: state=Stopped err=<nil>
	W0918 13:33:11.071738    5037 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:33:11.080003    5037 out.go:177] * Restarting existing qemu2 VM for "no-preload-882000" ...
	I0918 13:33:11.084133    5037 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:11.084170    5037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0f:14:81:47:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:33:11.086242    5037 main.go:141] libmachine: STDOUT: 
	I0918 13:33:11.086261    5037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:11.086291    5037 fix.go:56] duration metric: took 14.689125ms for fixHost
	I0918 13:33:11.086296    5037 start.go:83] releasing machines lock for "no-preload-882000", held for 14.704333ms
	W0918 13:33:11.086300    5037 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:11.086334    5037 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:11.086338    5037 start.go:729] Will try again in 5 seconds ...
	I0918 13:33:16.088444    5037 start.go:360] acquireMachinesLock for no-preload-882000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:16.089030    5037 start.go:364] duration metric: took 452.75µs to acquireMachinesLock for "no-preload-882000"
	I0918 13:33:16.089211    5037 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:33:16.089236    5037 fix.go:54] fixHost starting: 
	I0918 13:33:16.090023    5037 fix.go:112] recreateIfNeeded on no-preload-882000: state=Stopped err=<nil>
	W0918 13:33:16.090051    5037 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:33:16.099494    5037 out.go:177] * Restarting existing qemu2 VM for "no-preload-882000" ...
	I0918 13:33:16.103554    5037 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:16.103858    5037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0f:14:81:47:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/no-preload-882000/disk.qcow2
	I0918 13:33:16.113297    5037 main.go:141] libmachine: STDOUT: 
	I0918 13:33:16.113353    5037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:16.113450    5037 fix.go:56] duration metric: took 24.217709ms for fixHost
	I0918 13:33:16.113466    5037 start.go:83] releasing machines lock for "no-preload-882000", held for 24.38725ms
	W0918 13:33:16.113640    5037 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-882000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-882000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:16.122525    5037 out.go:201] 
	W0918 13:33:16.125581    5037 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:16.125624    5037 out.go:270] * 
	* 
	W0918 13:33:16.128202    5037 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:33:16.140470    5037 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-882000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (68.277416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
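
Note: the log's own remediation hint is to delete the stale profile before retrying. A retry sketch using the binary under test and the flags from the failing command above; it will fail with the same "Connection refused" again unless a socket_vmnet daemon is listening first:

	out/minikube-darwin-arm64 delete -p no-preload-882000
	out/minikube-darwin-arm64 start -p no-preload-882000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.1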

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-882000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (32.853333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-882000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-882000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-882000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.073875ms)

** stderr ** 
	error: context "no-preload-882000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-882000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (30.39625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-882000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (30.306292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-882000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-882000 --alsologtostderr -v=1: exit status 83 (39.442208ms)

-- stdout --
	* The control-plane node no-preload-882000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-882000"

-- /stdout --
** stderr ** 
	I0918 13:33:16.408095    5061 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:16.408248    5061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:16.408251    5061 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:16.408254    5061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:16.408405    5061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:16.408634    5061 out.go:352] Setting JSON to false
	I0918 13:33:16.408640    5061 mustload.go:65] Loading cluster: no-preload-882000
	I0918 13:33:16.408868    5061 config.go:182] Loaded profile config "no-preload-882000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:16.413034    5061 out.go:177] * The control-plane node no-preload-882000 host is not running: state=Stopped
	I0918 13:33:16.414121    5061 out.go:177]   To start a cluster, run: "minikube start -p no-preload-882000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-882000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (29.83075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (29.115416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-882000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
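
Every failing subtest in this group closes with the same post-mortem probe: the harness shells out to the minikube binary for the host state and branches on the exit code. Below is a minimal Go sketch of that pattern, assuming only what this log shows: the binary path, the profile name, and that exit status 7 accompanies state="Stopped" (which the harness treats as "may be ok").

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the harness runs after each failure.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-882000", "-n", "no-preload-882000")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this log, exit status 7 pairs with state "Stopped";
		// the harness logs "may be ok" and skips log retrieval.
		fmt.Printf("status error: exit status %d (state=%q)\n", exitErr.ExitCode(), state)
		return
	}
	fmt.Printf("host state: %q\n", state)
}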

TestStartStop/group/embed-certs/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-969000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-969000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.915020916s)

-- stdout --
	* [embed-certs-969000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-969000" primary control-plane node in "embed-certs-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:33:16.727632    5078 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:16.727760    5078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:16.727764    5078 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:16.727766    5078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:16.727896    5078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:16.728950    5078 out.go:352] Setting JSON to false
	I0918 13:33:16.745016    5078 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3755,"bootTime":1726687841,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:33:16.745085    5078 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:33:16.750000    5078 out.go:177] * [embed-certs-969000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:33:16.758831    5078 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:33:16.758887    5078 notify.go:220] Checking for updates...
	I0918 13:33:16.766001    5078 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:33:16.768928    5078 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:33:16.771990    5078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:33:16.774993    5078 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:33:16.776465    5078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:33:16.780252    5078 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:16.780314    5078 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:16.780356    5078 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:33:16.784990    5078 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:33:16.790913    5078 start.go:297] selected driver: qemu2
	I0918 13:33:16.790920    5078 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:33:16.790926    5078 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:33:16.793132    5078 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:33:16.796981    5078 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:33:16.798327    5078 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:33:16.798348    5078 cni.go:84] Creating CNI manager for ""
	I0918 13:33:16.798391    5078 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:33:16.798404    5078 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:33:16.798436    5078 start.go:340] cluster config:
	{Name:embed-certs-969000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:16.801879    5078 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:16.809968    5078 out.go:177] * Starting "embed-certs-969000" primary control-plane node in "embed-certs-969000" cluster
	I0918 13:33:16.813886    5078 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:33:16.813900    5078 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:33:16.813908    5078 cache.go:56] Caching tarball of preloaded images
	I0918 13:33:16.813969    5078 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:33:16.813981    5078 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:33:16.814053    5078 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/embed-certs-969000/config.json ...
	I0918 13:33:16.814064    5078 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/embed-certs-969000/config.json: {Name:mkb5ce0d2de2541d49e776299362fb1c45f1817c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:33:16.814279    5078 start.go:360] acquireMachinesLock for embed-certs-969000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:16.814313    5078 start.go:364] duration metric: took 28.416µs to acquireMachinesLock for "embed-certs-969000"
	I0918 13:33:16.814328    5078 start.go:93] Provisioning new machine with config: &{Name:embed-certs-969000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:33:16.814354    5078 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:33:16.822918    5078 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:33:16.840582    5078 start.go:159] libmachine.API.Create for "embed-certs-969000" (driver="qemu2")
	I0918 13:33:16.840616    5078 client.go:168] LocalClient.Create starting
	I0918 13:33:16.840690    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:33:16.840720    5078 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:16.840735    5078 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:16.840768    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:33:16.840791    5078 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:16.840800    5078 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:16.841137    5078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:33:17.005327    5078 main.go:141] libmachine: Creating SSH key...
	I0918 13:33:17.059533    5078 main.go:141] libmachine: Creating Disk image...
	I0918 13:33:17.059539    5078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:33:17.059707    5078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:17.069267    5078 main.go:141] libmachine: STDOUT: 
	I0918 13:33:17.069284    5078 main.go:141] libmachine: STDERR: 
	I0918 13:33:17.069339    5078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2 +20000M
	I0918 13:33:17.077327    5078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:33:17.077349    5078 main.go:141] libmachine: STDERR: 
	I0918 13:33:17.077367    5078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:17.077373    5078 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:33:17.077383    5078 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:17.077411    5078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b1:f5:8e:c4:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:17.079180    5078 main.go:141] libmachine: STDOUT: 
	I0918 13:33:17.079195    5078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:17.079217    5078 client.go:171] duration metric: took 238.5995ms to LocalClient.Create
	I0918 13:33:19.081336    5078 start.go:128] duration metric: took 2.267019959s to createHost
	I0918 13:33:19.081391    5078 start.go:83] releasing machines lock for "embed-certs-969000", held for 2.267128333s
	W0918 13:33:19.081455    5078 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:19.096810    5078 out.go:177] * Deleting "embed-certs-969000" in qemu2 ...
	W0918 13:33:19.130217    5078 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:19.130252    5078 start.go:729] Will try again in 5 seconds ...
	I0918 13:33:24.132311    5078 start.go:360] acquireMachinesLock for embed-certs-969000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:24.132719    5078 start.go:364] duration metric: took 338.334µs to acquireMachinesLock for "embed-certs-969000"
	I0918 13:33:24.132824    5078 start.go:93] Provisioning new machine with config: &{Name:embed-certs-969000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:33:24.133064    5078 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:33:24.153823    5078 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:33:24.208769    5078 start.go:159] libmachine.API.Create for "embed-certs-969000" (driver="qemu2")
	I0918 13:33:24.208809    5078 client.go:168] LocalClient.Create starting
	I0918 13:33:24.208923    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:33:24.208990    5078 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:24.209011    5078 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:24.209070    5078 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:33:24.209118    5078 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:24.209132    5078 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:24.209683    5078 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:33:24.384249    5078 main.go:141] libmachine: Creating SSH key...
	I0918 13:33:24.539095    5078 main.go:141] libmachine: Creating Disk image...
	I0918 13:33:24.539101    5078 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:33:24.539306    5078 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:24.549158    5078 main.go:141] libmachine: STDOUT: 
	I0918 13:33:24.549184    5078 main.go:141] libmachine: STDERR: 
	I0918 13:33:24.549248    5078 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2 +20000M
	I0918 13:33:24.557224    5078 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:33:24.557239    5078 main.go:141] libmachine: STDERR: 
	I0918 13:33:24.557250    5078 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:24.557255    5078 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:33:24.557264    5078 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:24.557295    5078 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b2:b6:0d:68:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:24.558882    5078 main.go:141] libmachine: STDOUT: 
	I0918 13:33:24.558894    5078 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:24.558906    5078 client.go:171] duration metric: took 350.100666ms to LocalClient.Create
	I0918 13:33:26.561031    5078 start.go:128] duration metric: took 2.427977916s to createHost
	I0918 13:33:26.561081    5078 start.go:83] releasing machines lock for "embed-certs-969000", held for 2.428401875s
	W0918 13:33:26.561457    5078 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:26.576225    5078 out.go:201] 
	W0918 13:33:26.580276    5078 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:26.580310    5078 out.go:270] * 
	* 
	W0918 13:33:26.582955    5078 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:33:26.600087    5078 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-969000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (70.844583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.99s)
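
Both create attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. The symptom can be reproduced independently of minikube by dialing the unix socket directly; a sketch, assuming only the socket path shown in the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// "connection refused" here means the socket file exists but no
	// daemon is accepting on it, the exact ERROR both VM launches hit;
	// a missing file instead reports "no such file or directory".
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}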

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-969000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-969000 create -f testdata/busybox.yaml: exit status 1 (30.563625ms)

** stderr ** 
	error: context "embed-certs-969000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-969000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (30.143833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (30.287667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
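
This failure is downstream of FirstStart: the cluster was never created, so the kubeconfig carries no "embed-certs-969000" context and every kubectl invocation short-circuits. A hedged pre-flight sketch using the standard "kubectl config get-contexts -o name" listing (illustrative only; the harness itself does not run this check):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == "embed-certs-969000" {
			fmt.Println("context exists; workloads can be created")
			return
		}
	}
	// Matches the stderr above: nothing ever registered the context.
	fmt.Println(`context "embed-certs-969000" does not exist`)
}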

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-969000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-969000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-969000 describe deploy/metrics-server -n kube-system: exit status 1 (27.611375ms)

** stderr ** 
	error: context "embed-certs-969000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-969000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (30.325417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-969000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-969000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.19597975s)

-- stdout --
	* [embed-certs-969000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-969000" primary control-plane node in "embed-certs-969000" cluster
	* Restarting existing qemu2 VM for "embed-certs-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:33:28.805953    5131 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:28.806071    5131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:28.806074    5131 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:28.806076    5131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:28.806219    5131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:28.807233    5131 out.go:352] Setting JSON to false
	I0918 13:33:28.823211    5131 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3767,"bootTime":1726687841,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:33:28.823278    5131 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:33:28.828257    5131 out.go:177] * [embed-certs-969000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:33:28.836423    5131 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:33:28.836476    5131 notify.go:220] Checking for updates...
	I0918 13:33:28.843376    5131 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:33:28.846392    5131 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:33:28.849307    5131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:33:28.852425    5131 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:33:28.863070    5131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:33:28.866584    5131 config.go:182] Loaded profile config "embed-certs-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:28.866874    5131 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:33:28.871350    5131 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:33:28.878331    5131 start.go:297] selected driver: qemu2
	I0918 13:33:28.878338    5131 start.go:901] validating driver "qemu2" against &{Name:embed-certs-969000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:28.878429    5131 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:33:28.880951    5131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:33:28.880977    5131 cni.go:84] Creating CNI manager for ""
	I0918 13:33:28.881004    5131 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:33:28.881024    5131 start.go:340] cluster config:
	{Name:embed-certs-969000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:28.884792    5131 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:28.892379    5131 out.go:177] * Starting "embed-certs-969000" primary control-plane node in "embed-certs-969000" cluster
	I0918 13:33:28.896337    5131 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:33:28.896354    5131 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:33:28.896365    5131 cache.go:56] Caching tarball of preloaded images
	I0918 13:33:28.896444    5131 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:33:28.896450    5131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:33:28.896537    5131 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/embed-certs-969000/config.json ...
	I0918 13:33:28.897123    5131 start.go:360] acquireMachinesLock for embed-certs-969000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:28.897168    5131 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "embed-certs-969000"
	I0918 13:33:28.897178    5131 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:33:28.897188    5131 fix.go:54] fixHost starting: 
	I0918 13:33:28.897316    5131 fix.go:112] recreateIfNeeded on embed-certs-969000: state=Stopped err=<nil>
	W0918 13:33:28.897328    5131 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:33:28.905308    5131 out.go:177] * Restarting existing qemu2 VM for "embed-certs-969000" ...
	I0918 13:33:28.909402    5131 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:28.909448    5131 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b2:b6:0d:68:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:28.911664    5131 main.go:141] libmachine: STDOUT: 
	I0918 13:33:28.911686    5131 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:28.911723    5131 fix.go:56] duration metric: took 14.537875ms for fixHost
	I0918 13:33:28.911728    5131 start.go:83] releasing machines lock for "embed-certs-969000", held for 14.554917ms
	W0918 13:33:28.911733    5131 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:28.911780    5131 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:28.911785    5131 start.go:729] Will try again in 5 seconds ...
	I0918 13:33:33.913925    5131 start.go:360] acquireMachinesLock for embed-certs-969000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:33.914389    5131 start.go:364] duration metric: took 349.584µs to acquireMachinesLock for "embed-certs-969000"
	I0918 13:33:33.914558    5131 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:33:33.914578    5131 fix.go:54] fixHost starting: 
	I0918 13:33:33.915302    5131 fix.go:112] recreateIfNeeded on embed-certs-969000: state=Stopped err=<nil>
	W0918 13:33:33.915331    5131 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:33:33.919705    5131 out.go:177] * Restarting existing qemu2 VM for "embed-certs-969000" ...
	I0918 13:33:33.927823    5131 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:33.928043    5131 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b2:b6:0d:68:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/embed-certs-969000/disk.qcow2
	I0918 13:33:33.937824    5131 main.go:141] libmachine: STDOUT: 
	I0918 13:33:33.937890    5131 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:33.937990    5131 fix.go:56] duration metric: took 23.411416ms for fixHost
	I0918 13:33:33.938010    5131 start.go:83] releasing machines lock for "embed-certs-969000", held for 23.597458ms
	W0918 13:33:33.938210    5131 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:33.945793    5131 out.go:201] 
	W0918 13:33:33.949751    5131 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:33.949783    5131 out.go:270] * 
	* 
	W0918 13:33:33.952531    5131 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:33:33.959725    5131 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-969000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (67.726375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
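
The restart path repeats the first-start shape: one fixed five-second retry ("Will try again in 5 seconds ...") and then a GUEST_PROVISION exit. A minimal sketch of that control flow; startHost here is a stand-in for the qemu launch, not minikube's real function:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the VM launch that fails twice in this log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second)
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}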

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-969000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (33.036125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-969000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.1495ms)

** stderr ** 
	error: context "embed-certs-969000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (30.614416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-969000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (30.057666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
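
The "-want +got" diff above compares the expected v1.31.1 image set against the output of "image list"; every expected image sits on the "-" (want) side and nothing appears on "+", meaning the stopped VM reported no images at all rather than wrong versions. The same check can be approximated by hand; the jq pipeline below is a sketch and assumes the JSON schema of recent minikube releases (an array of objects with a repoTags field):

	# An empty result here means the VM never booted, not that images are merely outdated.
	out/minikube-darwin-arm64 -p embed-certs-969000 image list --format=json | jq -r '.[].repoTags[]?'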

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-969000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-969000 --alsologtostderr -v=1: exit status 83 (42.4005ms)

-- stdout --
	* The control-plane node embed-certs-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-969000"

-- /stdout --
** stderr ** 
	I0918 13:33:34.231979    5156 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:34.232120    5156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:34.232124    5156 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:34.232126    5156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:34.232253    5156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:34.232463    5156 out.go:352] Setting JSON to false
	I0918 13:33:34.232469    5156 mustload.go:65] Loading cluster: embed-certs-969000
	I0918 13:33:34.232699    5156 config.go:182] Loaded profile config "embed-certs-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:34.237093    5156 out.go:177] * The control-plane node embed-certs-969000 host is not running: state=Stopped
	I0918 13:33:34.241100    5156 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-969000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-969000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (30.300542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (30.287917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
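
In this run, exit status 83 accompanies the path where minikube prints guidance instead of pausing: the control-plane host is Stopped, so there is nothing to pause. The recovery the binary itself suggests, using the exact commands from this run:

	# Bring the host up first, then pause.
	out/minikube-darwin-arm64 start -p embed-certs-969000
	out/minikube-darwin-arm64 pause -p embed-certs-969000 --alsologtostderr -v=1

On this host the start itself fails (see the socket_vmnet errors elsewhere in this report), so the pause can only succeed once that underlying issue is fixed.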

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.861903584s)

-- stdout --
	* [default-k8s-diff-port-826000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-826000" primary control-plane node in "default-k8s-diff-port-826000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-826000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:33:34.648635    5180 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:34.648748    5180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:34.648753    5180 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:34.648756    5180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:34.648879    5180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:34.649961    5180 out.go:352] Setting JSON to false
	I0918 13:33:34.666193    5180 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3773,"bootTime":1726687841,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:33:34.666258    5180 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:33:34.670160    5180 out.go:177] * [default-k8s-diff-port-826000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:33:34.677035    5180 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:33:34.677063    5180 notify.go:220] Checking for updates...
	I0918 13:33:34.681617    5180 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:33:34.684986    5180 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:33:34.688084    5180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:33:34.689696    5180 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:33:34.693032    5180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:33:34.696444    5180 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:34.696505    5180 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:34.696548    5180 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:33:34.700909    5180 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:33:34.708042    5180 start.go:297] selected driver: qemu2
	I0918 13:33:34.708049    5180 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:33:34.708055    5180 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:33:34.710161    5180 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:33:34.713086    5180 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:33:34.716204    5180 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:33:34.716228    5180 cni.go:84] Creating CNI manager for ""
	I0918 13:33:34.716283    5180 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:33:34.716295    5180 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:33:34.716325    5180 start.go:340] cluster config:
	{Name:default-k8s-diff-port-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:34.719645    5180 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:34.725986    5180 out.go:177] * Starting "default-k8s-diff-port-826000" primary control-plane node in "default-k8s-diff-port-826000" cluster
	I0918 13:33:34.730013    5180 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:33:34.730029    5180 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:33:34.730037    5180 cache.go:56] Caching tarball of preloaded images
	I0918 13:33:34.730108    5180 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:33:34.730113    5180 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:33:34.730181    5180 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/default-k8s-diff-port-826000/config.json ...
	I0918 13:33:34.730193    5180 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/default-k8s-diff-port-826000/config.json: {Name:mk643dd2fae507090671dba11e6f576a256e226d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:33:34.730414    5180 start.go:360] acquireMachinesLock for default-k8s-diff-port-826000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:34.730448    5180 start.go:364] duration metric: took 26.791µs to acquireMachinesLock for "default-k8s-diff-port-826000"
	I0918 13:33:34.730459    5180 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:33:34.730481    5180 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:33:34.738049    5180 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:33:34.755391    5180 start.go:159] libmachine.API.Create for "default-k8s-diff-port-826000" (driver="qemu2")
	I0918 13:33:34.755425    5180 client.go:168] LocalClient.Create starting
	I0918 13:33:34.755495    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:33:34.755527    5180 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:34.755536    5180 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:34.755582    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:33:34.755605    5180 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:34.755611    5180 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:34.755961    5180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:33:34.921433    5180 main.go:141] libmachine: Creating SSH key...
	I0918 13:33:35.027371    5180 main.go:141] libmachine: Creating Disk image...
	I0918 13:33:35.027377    5180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:33:35.027552    5180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:35.037096    5180 main.go:141] libmachine: STDOUT: 
	I0918 13:33:35.037118    5180 main.go:141] libmachine: STDERR: 
	I0918 13:33:35.037170    5180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2 +20000M
	I0918 13:33:35.045031    5180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:33:35.045047    5180 main.go:141] libmachine: STDERR: 
	I0918 13:33:35.045071    5180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:35.045077    5180 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:33:35.045089    5180 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:35.045116    5180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:87:06:10:82:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:35.046706    5180 main.go:141] libmachine: STDOUT: 
	I0918 13:33:35.046722    5180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:35.046744    5180 client.go:171] duration metric: took 291.319541ms to LocalClient.Create
	I0918 13:33:37.048917    5180 start.go:128] duration metric: took 2.318467s to createHost
	I0918 13:33:37.049058    5180 start.go:83] releasing machines lock for "default-k8s-diff-port-826000", held for 2.31865825s
	W0918 13:33:37.049111    5180 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:37.060926    5180 out.go:177] * Deleting "default-k8s-diff-port-826000" in qemu2 ...
	W0918 13:33:37.096587    5180 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:37.096618    5180 start.go:729] Will try again in 5 seconds ...
	I0918 13:33:42.098831    5180 start.go:360] acquireMachinesLock for default-k8s-diff-port-826000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:42.099388    5180 start.go:364] duration metric: took 444.583µs to acquireMachinesLock for "default-k8s-diff-port-826000"
	I0918 13:33:42.099529    5180 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:33:42.099820    5180 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:33:42.120727    5180 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:33:42.173387    5180 start.go:159] libmachine.API.Create for "default-k8s-diff-port-826000" (driver="qemu2")
	I0918 13:33:42.173440    5180 client.go:168] LocalClient.Create starting
	I0918 13:33:42.173558    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:33:42.173631    5180 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:42.173650    5180 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:42.173720    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:33:42.173765    5180 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:42.173778    5180 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:42.174334    5180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:33:42.347734    5180 main.go:141] libmachine: Creating SSH key...
	I0918 13:33:42.416472    5180 main.go:141] libmachine: Creating Disk image...
	I0918 13:33:42.416477    5180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:33:42.416658    5180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:42.426096    5180 main.go:141] libmachine: STDOUT: 
	I0918 13:33:42.426114    5180 main.go:141] libmachine: STDERR: 
	I0918 13:33:42.426169    5180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2 +20000M
	I0918 13:33:42.434232    5180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:33:42.434250    5180 main.go:141] libmachine: STDERR: 
	I0918 13:33:42.434265    5180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:42.434270    5180 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:33:42.434277    5180 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:42.434304    5180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ad:15:2f:fc:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:42.435881    5180 main.go:141] libmachine: STDOUT: 
	I0918 13:33:42.435895    5180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:42.435913    5180 client.go:171] duration metric: took 262.474416ms to LocalClient.Create
	I0918 13:33:44.438059    5180 start.go:128] duration metric: took 2.338234583s to createHost
	I0918 13:33:44.438160    5180 start.go:83] releasing machines lock for "default-k8s-diff-port-826000", held for 2.338807333s
	W0918 13:33:44.438627    5180 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-826000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:44.453224    5180 out.go:201] 
	W0918 13:33:44.457341    5180 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:44.457369    5180 out.go:270] * 
	* 
	W0918 13:33:44.459894    5180 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:33:44.469162    5180 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (68.281166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.93s)
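
The actual failure in this start, and in the other qemu2 starts in this report, is environmental: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never gets a network file descriptor and minikube gives up after two attempts. A minimal host-side triage, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver documentation:

	# Does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, (re)start it; socket_vmnet needs root to create the vmnet interface.
	sudo brew services start socket_vmnet

Until the daemon is listening again, every test below that starts a qemu2 VM on the socket_vmnet network will fail the same way.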

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-826000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-826000 create -f testdata/busybox.yaml: exit status 1 (30.216834ms)

** stderr ** 
	error: context "default-k8s-diff-port-826000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-826000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (30.624708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (30.434375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-826000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-826000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-826000 describe deploy/metrics-server -n kube-system: exit status 1 (27.087875ms)

** stderr ** 
	error: context "default-k8s-diff-port-826000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-826000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (30.80225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
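
This subtest enables metrics-server with an overridden image and registry, then asserts that the resulting Deployment references "fake.domain/registry.k8s.io/echoserver:1.4"; with no kubeconfig context the describe step can never run. Done by hand against a working cluster, the same assertion looks roughly like this (the jsonpath query is a sketch; the addons-enable command and flags are taken verbatim from this run):

	out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-826000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# Expect the fake.domain-prefixed image reference.
	kubectl --context default-k8s-diff-port-826000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'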

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.186380708s)

-- stdout --
	* [default-k8s-diff-port-826000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-826000" primary control-plane node in "default-k8s-diff-port-826000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-826000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-826000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:33:48.833583    5241 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:48.833714    5241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:48.833717    5241 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:48.833721    5241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:48.833857    5241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:48.834857    5241 out.go:352] Setting JSON to false
	I0918 13:33:48.850903    5241 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3787,"bootTime":1726687841,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:33:48.850972    5241 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:33:48.855598    5241 out.go:177] * [default-k8s-diff-port-826000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:33:48.862627    5241 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:33:48.862666    5241 notify.go:220] Checking for updates...
	I0918 13:33:48.869530    5241 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:33:48.872569    5241 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:33:48.875599    5241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:33:48.878624    5241 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:33:48.881593    5241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:33:48.884834    5241 config.go:182] Loaded profile config "default-k8s-diff-port-826000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:48.885085    5241 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:33:48.889539    5241 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:33:48.896617    5241 start.go:297] selected driver: qemu2
	I0918 13:33:48.896624    5241 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:48.896676    5241 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:33:48.899048    5241 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:33:48.899073    5241 cni.go:84] Creating CNI manager for ""
	I0918 13:33:48.899097    5241 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:33:48.899132    5241 start.go:340] cluster config:
	{Name:default-k8s-diff-port-826000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:48.902771    5241 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:48.910523    5241 out.go:177] * Starting "default-k8s-diff-port-826000" primary control-plane node in "default-k8s-diff-port-826000" cluster
	I0918 13:33:48.914548    5241 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:33:48.914563    5241 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:33:48.914572    5241 cache.go:56] Caching tarball of preloaded images
	I0918 13:33:48.914632    5241 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:33:48.914638    5241 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:33:48.914702    5241 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/default-k8s-diff-port-826000/config.json ...
	I0918 13:33:48.915180    5241 start.go:360] acquireMachinesLock for default-k8s-diff-port-826000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:48.915212    5241 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "default-k8s-diff-port-826000"
	I0918 13:33:48.915221    5241 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:33:48.915229    5241 fix.go:54] fixHost starting: 
	I0918 13:33:48.915367    5241 fix.go:112] recreateIfNeeded on default-k8s-diff-port-826000: state=Stopped err=<nil>
	W0918 13:33:48.915378    5241 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:33:48.919585    5241 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-826000" ...
	I0918 13:33:48.927564    5241 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:48.927604    5241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ad:15:2f:fc:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:48.929658    5241 main.go:141] libmachine: STDOUT: 
	I0918 13:33:48.929682    5241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:48.929714    5241 fix.go:56] duration metric: took 14.486417ms for fixHost
	I0918 13:33:48.929719    5241 start.go:83] releasing machines lock for "default-k8s-diff-port-826000", held for 14.502375ms
	W0918 13:33:48.929724    5241 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:48.929764    5241 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:48.929769    5241 start.go:729] Will try again in 5 seconds ...
	I0918 13:33:53.931890    5241 start.go:360] acquireMachinesLock for default-k8s-diff-port-826000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:53.932264    5241 start.go:364] duration metric: took 291.833µs to acquireMachinesLock for "default-k8s-diff-port-826000"
	I0918 13:33:53.932401    5241 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:33:53.932420    5241 fix.go:54] fixHost starting: 
	I0918 13:33:53.933100    5241 fix.go:112] recreateIfNeeded on default-k8s-diff-port-826000: state=Stopped err=<nil>
	W0918 13:33:53.933126    5241 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:33:53.942440    5241 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-826000" ...
	I0918 13:33:53.946440    5241 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:53.946740    5241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ad:15:2f:fc:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/default-k8s-diff-port-826000/disk.qcow2
	I0918 13:33:53.955652    5241 main.go:141] libmachine: STDOUT: 
	I0918 13:33:53.955707    5241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:53.955778    5241 fix.go:56] duration metric: took 23.356791ms for fixHost
	I0918 13:33:53.955791    5241 start.go:83] releasing machines lock for "default-k8s-diff-port-826000", held for 23.503041ms
	W0918 13:33:53.955983    5241 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-826000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-826000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:53.963387    5241 out.go:201] 
	W0918 13:33:53.967552    5241 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:33:53.967584    5241 out.go:270] * 
	* 
	W0918 13:33:53.969960    5241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:33:53.977470    5241 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (68.240875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
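
SecondStart takes the existing-profile path (fixHost, then "Restarting existing qemu2 VM") and dies on the same socket_vmnet connect, which points at the host environment rather than a corrupt profile. Once the daemon is reachable again, the recovery the error text itself proposes would be, sketched with the flags from this run:

	# Only worth running after socket_vmnet is back up:
	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-826000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000 --memory=2200 \
	  --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.1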

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-826000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (32.747458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-826000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-826000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-826000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.553875ms)

** stderr ** 
	error: context "default-k8s-diff-port-826000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-826000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (29.754334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
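
Both the client-config lookup and the kubectl describe above fail before reaching any cluster: the post-stop start exited 80, so no context named default-k8s-diff-port-826000 was ever written to the kubeconfig. A quick confirmation sketch (kubeconfig path as printed in the start output elsewhere in this report):

    KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig \
      kubectl config get-contexts    # no default-k8s-diff-port-826000 entry expected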

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-826000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (30.262209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
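
The (-want +got) block above is a go-cmp diff: every expected v1.31.1 image is prefixed with "-", meaning all of them are absent, which follows directly from the VM never booting to load the preload tarball. Against a healthy profile the same check would look like this sketch (the grep filter is illustrative only):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-826000 image list --format=json \
      | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver:v1.31.1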

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-826000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-826000 --alsologtostderr -v=1: exit status 83 (41.473125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-826000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-826000"

-- /stdout --
** stderr ** 
	I0918 13:33:54.248175    5260 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:54.248334    5260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:54.248338    5260 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:54.248340    5260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:54.248474    5260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:54.248676    5260 out.go:352] Setting JSON to false
	I0918 13:33:54.248683    5260 mustload.go:65] Loading cluster: default-k8s-diff-port-826000
	I0918 13:33:54.248888    5260 config.go:182] Loaded profile config "default-k8s-diff-port-826000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:54.252829    5260 out.go:177] * The control-plane node default-k8s-diff-port-826000 host is not running: state=Stopped
	I0918 13:33:54.255744    5260 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-826000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-826000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (29.81025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (30.141125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-826000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
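
Exit status 83 appears to be minikube's advisory "host not running" path rather than a crash: mustload finds the profile, sees state=Stopped, prints the recovery hint, and exits without attempting to pause. The sequence the tool itself suggests is sketched below; it will keep failing on this host until socket_vmnet is reachable:

    out/minikube-darwin-arm64 start -p default-k8s-diff-port-826000
    out/minikube-darwin-arm64 pause -p default-k8s-diff-port-826000 --alsologtostderr -v=1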

TestStartStop/group/newest-cni/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-717000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-717000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.965692375s)

-- stdout --
	* [newest-cni-717000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-717000" primary control-plane node in "newest-cni-717000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-717000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:33:54.562683    5277 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:33:54.562813    5277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:54.562817    5277 out.go:358] Setting ErrFile to fd 2...
	I0918 13:33:54.562819    5277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:33:54.562942    5277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:33:54.564088    5277 out.go:352] Setting JSON to false
	I0918 13:33:54.580205    5277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3793,"bootTime":1726687841,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:33:54.580279    5277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:33:54.584842    5277 out.go:177] * [newest-cni-717000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:33:54.591779    5277 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:33:54.591825    5277 notify.go:220] Checking for updates...
	I0918 13:33:54.595755    5277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:33:54.598775    5277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:33:54.601824    5277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:33:54.604831    5277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:33:54.607759    5277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:33:54.611084    5277 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:54.611146    5277 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:33:54.611204    5277 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:33:54.615702    5277 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:33:54.622746    5277 start.go:297] selected driver: qemu2
	I0918 13:33:54.622754    5277 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:33:54.622760    5277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:33:54.624945    5277 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0918 13:33:54.624981    5277 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0918 13:33:54.632732    5277 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:33:54.635891    5277 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0918 13:33:54.635911    5277 cni.go:84] Creating CNI manager for ""
	I0918 13:33:54.635944    5277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:33:54.635950    5277 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:33:54.635990    5277 start.go:340] cluster config:
	{Name:newest-cni-717000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:33:54.639754    5277 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:33:54.647765    5277 out.go:177] * Starting "newest-cni-717000" primary control-plane node in "newest-cni-717000" cluster
	I0918 13:33:54.650688    5277 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:33:54.650706    5277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:33:54.650719    5277 cache.go:56] Caching tarball of preloaded images
	I0918 13:33:54.650797    5277 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:33:54.650803    5277 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:33:54.650866    5277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/newest-cni-717000/config.json ...
	I0918 13:33:54.650878    5277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/newest-cni-717000/config.json: {Name:mk79a2c62c9054e140b3eb945b6e70b1715ac6c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:33:54.651112    5277 start.go:360] acquireMachinesLock for newest-cni-717000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:33:54.651150    5277 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "newest-cni-717000"
	I0918 13:33:54.651162    5277 start.go:93] Provisioning new machine with config: &{Name:newest-cni-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:33:54.651199    5277 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:33:54.658702    5277 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:33:54.677510    5277 start.go:159] libmachine.API.Create for "newest-cni-717000" (driver="qemu2")
	I0918 13:33:54.677539    5277 client.go:168] LocalClient.Create starting
	I0918 13:33:54.677604    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:33:54.677635    5277 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:54.677646    5277 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:54.677689    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:33:54.677714    5277 main.go:141] libmachine: Decoding PEM data...
	I0918 13:33:54.677722    5277 main.go:141] libmachine: Parsing certificate...
	I0918 13:33:54.678092    5277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:33:54.841888    5277 main.go:141] libmachine: Creating SSH key...
	I0918 13:33:55.039821    5277 main.go:141] libmachine: Creating Disk image...
	I0918 13:33:55.039828    5277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:33:55.040035    5277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:33:55.049929    5277 main.go:141] libmachine: STDOUT: 
	I0918 13:33:55.049953    5277 main.go:141] libmachine: STDERR: 
	I0918 13:33:55.050019    5277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2 +20000M
	I0918 13:33:55.058027    5277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:33:55.058143    5277 main.go:141] libmachine: STDERR: 
	I0918 13:33:55.058166    5277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:33:55.058172    5277 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:33:55.058187    5277 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:33:55.058213    5277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:d6:38:16:f2:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:33:55.059846    5277 main.go:141] libmachine: STDOUT: 
	I0918 13:33:55.059860    5277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:33:55.059881    5277 client.go:171] duration metric: took 382.3465ms to LocalClient.Create
	I0918 13:33:57.062078    5277 start.go:128] duration metric: took 2.410918208s to createHost
	I0918 13:33:57.062132    5277 start.go:83] releasing machines lock for "newest-cni-717000", held for 2.411035041s
	W0918 13:33:57.062181    5277 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:57.079479    5277 out.go:177] * Deleting "newest-cni-717000" in qemu2 ...
	W0918 13:33:57.120218    5277 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:33:57.120240    5277 start.go:729] Will try again in 5 seconds ...
	I0918 13:34:02.122465    5277 start.go:360] acquireMachinesLock for newest-cni-717000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:02.122966    5277 start.go:364] duration metric: took 385.167µs to acquireMachinesLock for "newest-cni-717000"
	I0918 13:34:02.123113    5277 start.go:93] Provisioning new machine with config: &{Name:newest-cni-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:02.123567    5277 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:02.144339    5277 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0918 13:34:02.197379    5277 start.go:159] libmachine.API.Create for "newest-cni-717000" (driver="qemu2")
	I0918 13:34:02.197427    5277 client.go:168] LocalClient.Create starting
	I0918 13:34:02.197550    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:02.197615    5277 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:02.197631    5277 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:02.197695    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:02.197739    5277 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:02.197754    5277 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:02.198497    5277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:02.372688    5277 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:02.424288    5277 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:02.424293    5277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:02.424459    5277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:34:02.434138    5277 main.go:141] libmachine: STDOUT: 
	I0918 13:34:02.434153    5277 main.go:141] libmachine: STDERR: 
	I0918 13:34:02.434210    5277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2 +20000M
	I0918 13:34:02.442421    5277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:02.442436    5277 main.go:141] libmachine: STDERR: 
	I0918 13:34:02.442448    5277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:34:02.442453    5277 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:02.442463    5277 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:02.442490    5277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:56:4d:88:e7:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:34:02.444183    5277 main.go:141] libmachine: STDOUT: 
	I0918 13:34:02.444200    5277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:02.444216    5277 client.go:171] duration metric: took 246.785875ms to LocalClient.Create
	I0918 13:34:04.446347    5277 start.go:128] duration metric: took 2.322802709s to createHost
	I0918 13:34:04.446418    5277 start.go:83] releasing machines lock for "newest-cni-717000", held for 2.323488042s
	W0918 13:34:04.446850    5277 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:04.463614    5277 out.go:201] 
	W0918 13:34:04.468646    5277 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:34:04.468674    5277 out.go:270] * 
	* 
	W0918 13:34:04.471403    5277 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:34:04.487377    5277 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-717000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000: exit status 7 (68.07025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.04s)
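
The trace above shows the driver's two-attempt create loop: build the disk (qemu-img convert raw to qcow2, then resize +20000M), exec qemu-system-aarch64 through socket_vmnet_client, and on failure delete the profile and retry once after 5 seconds; both attempts die at the first socket connect. Since socket_vmnet_client connects to the socket before exec'ing its payload, the networking layer can be exercised without minikube; a sketch (the payload command is arbitrary):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok \
      || echo "socket_vmnet refused the connection"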

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-717000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-717000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.186206458s)

-- stdout --
	* [newest-cni-717000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-717000" primary control-plane node in "newest-cni-717000" cluster
	* Restarting existing qemu2 VM for "newest-cni-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-717000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:34:08.396114    5319 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:34:08.396222    5319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:08.396226    5319 out.go:358] Setting ErrFile to fd 2...
	I0918 13:34:08.396228    5319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:08.396354    5319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:34:08.397375    5319 out.go:352] Setting JSON to false
	I0918 13:34:08.413309    5319 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3807,"bootTime":1726687841,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:34:08.413378    5319 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:34:08.418352    5319 out.go:177] * [newest-cni-717000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:34:08.426283    5319 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:34:08.426361    5319 notify.go:220] Checking for updates...
	I0918 13:34:08.434278    5319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:34:08.437329    5319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:34:08.440280    5319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:34:08.443316    5319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:34:08.446302    5319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:34:08.449653    5319 config.go:182] Loaded profile config "newest-cni-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:08.449917    5319 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:34:08.453227    5319 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:34:08.460289    5319 start.go:297] selected driver: qemu2
	I0918 13:34:08.460295    5319 start.go:901] validating driver "qemu2" against &{Name:newest-cni-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:34:08.460350    5319 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:34:08.462626    5319 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0918 13:34:08.462652    5319 cni.go:84] Creating CNI manager for ""
	I0918 13:34:08.462676    5319 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:34:08.462699    5319 start.go:340] cluster config:
	{Name:newest-cni-717000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-717000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:34:08.466194    5319 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:34:08.474266    5319 out.go:177] * Starting "newest-cni-717000" primary control-plane node in "newest-cni-717000" cluster
	I0918 13:34:08.478168    5319 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:34:08.478185    5319 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:34:08.478194    5319 cache.go:56] Caching tarball of preloaded images
	I0918 13:34:08.478260    5319 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:34:08.478266    5319 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:34:08.478330    5319 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/newest-cni-717000/config.json ...
	I0918 13:34:08.478803    5319 start.go:360] acquireMachinesLock for newest-cni-717000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:08.478830    5319 start.go:364] duration metric: took 22.083µs to acquireMachinesLock for "newest-cni-717000"
	I0918 13:34:08.478838    5319 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:34:08.478844    5319 fix.go:54] fixHost starting: 
	I0918 13:34:08.478961    5319 fix.go:112] recreateIfNeeded on newest-cni-717000: state=Stopped err=<nil>
	W0918 13:34:08.478970    5319 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:34:08.483325    5319 out.go:177] * Restarting existing qemu2 VM for "newest-cni-717000" ...
	I0918 13:34:08.491266    5319 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:08.491297    5319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:56:4d:88:e7:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:34:08.493261    5319 main.go:141] libmachine: STDOUT: 
	I0918 13:34:08.493278    5319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:08.493309    5319 fix.go:56] duration metric: took 14.465084ms for fixHost
	I0918 13:34:08.493313    5319 start.go:83] releasing machines lock for "newest-cni-717000", held for 14.479625ms
	W0918 13:34:08.493319    5319 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:34:08.493359    5319 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:08.493363    5319 start.go:729] Will try again in 5 seconds ...
	I0918 13:34:13.495514    5319 start.go:360] acquireMachinesLock for newest-cni-717000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:13.495891    5319 start.go:364] duration metric: took 272.625µs to acquireMachinesLock for "newest-cni-717000"
	I0918 13:34:13.496003    5319 start.go:96] Skipping create...Using existing machine configuration
	I0918 13:34:13.496022    5319 fix.go:54] fixHost starting: 
	I0918 13:34:13.496705    5319 fix.go:112] recreateIfNeeded on newest-cni-717000: state=Stopped err=<nil>
	W0918 13:34:13.496734    5319 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 13:34:13.505899    5319 out.go:177] * Restarting existing qemu2 VM for "newest-cni-717000" ...
	I0918 13:34:13.510027    5319 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:13.510280    5319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:56:4d:88:e7:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/newest-cni-717000/disk.qcow2
	I0918 13:34:13.518953    5319 main.go:141] libmachine: STDOUT: 
	I0918 13:34:13.519006    5319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:13.519079    5319 fix.go:56] duration metric: took 23.055542ms for fixHost
	I0918 13:34:13.519096    5319 start.go:83] releasing machines lock for "newest-cni-717000", held for 23.181334ms
	W0918 13:34:13.519259    5319 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-717000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:13.525938    5319 out.go:201] 
	W0918 13:34:13.530097    5319 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:34:13.530123    5319 out.go:270] * 
	* 
	W0918 13:34:13.532838    5319 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:34:13.540017    5319 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-717000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000: exit status 7 (68.370333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
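
Unlike FirstStart, the restart path skips disk creation entirely: fixHost finds the existing machine in state=Stopped and re-execs the saved qemu command, so each attempt fails within roughly 15-25ms at the initial connect. Replaying that command by hand is the cheapest reproduction; abridged here to the flags relevant to networking, with a placeholder disk path:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -accel hvf -m 2200 -smp 2 \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
      -display none <disk.qcow2>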

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-717000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000: exit status 7 (30.355792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
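Note: the "(-want +got)" block above is a go-cmp style diff: each "-" line is an image expected for v1.31.1 that was absent from the "image list" output, and there are no "+" lines because the list came back empty, consistent with the profile's host being Stopped. Against a running profile the same check can be reproduced by hand; a sketch (table output is one of the formats image list accepts, alongside json):

    # list the images cached in the profile; with the VM stopped this is empty
    out/minikube-darwin-arm64 -p newest-cni-717000 image list --format=table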

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-717000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-717000 --alsologtostderr -v=1: exit status 83 (41.103666ms)

-- stdout --
	* The control-plane node newest-cni-717000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-717000"

-- /stdout --
** stderr ** 
	I0918 13:34:13.723395    5333 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:34:13.723581    5333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:13.723584    5333 out.go:358] Setting ErrFile to fd 2...
	I0918 13:34:13.723586    5333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:13.723710    5333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:34:13.723950    5333 out.go:352] Setting JSON to false
	I0918 13:34:13.723956    5333 mustload.go:65] Loading cluster: newest-cni-717000
	I0918 13:34:13.724174    5333 config.go:182] Loaded profile config "newest-cni-717000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:13.728234    5333 out.go:177] * The control-plane node newest-cni-717000 host is not running: state=Stopped
	I0918 13:34:13.732221    5333 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-717000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-717000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000: exit status 7 (29.747459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-717000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000: exit status 7 (30.1335ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
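Note: the pause failure (exit status 83) has the same root cause as the preceding subtests: the control-plane host never came back after the stop, so every command finds state=Stopped. The recovery path is the one minikube itself prints, start the profile and then pause:

    # sequence suggested by the output above; it will only succeed once the
    # socket_vmnet issue is fixed, since start otherwise fails again
    out/minikube-darwin-arm64 start -p newest-cni-717000
    out/minikube-darwin-arm64 pause -p newest-cni-717000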

TestNetworkPlugins/group/auto/Start (9.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.966017292s)

-- stdout --
	* [auto-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-838000" primary control-plane node in "auto-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:34:14.039765    5350 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:34:14.039890    5350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:14.039893    5350 out.go:358] Setting ErrFile to fd 2...
	I0918 13:34:14.039896    5350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:14.040053    5350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:34:14.041129    5350 out.go:352] Setting JSON to false
	I0918 13:34:14.057409    5350 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3813,"bootTime":1726687841,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:34:14.057479    5350 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:34:14.062332    5350 out.go:177] * [auto-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:34:14.069166    5350 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:34:14.069230    5350 notify.go:220] Checking for updates...
	I0918 13:34:14.075203    5350 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:34:14.078157    5350 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:34:14.081200    5350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:34:14.084251    5350 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:34:14.087225    5350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:34:14.090562    5350 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:14.090619    5350 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:14.090667    5350 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:34:14.095199    5350 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:34:14.102134    5350 start.go:297] selected driver: qemu2
	I0918 13:34:14.102141    5350 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:34:14.102157    5350 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:34:14.104595    5350 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:34:14.107265    5350 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:34:14.110258    5350 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:34:14.110276    5350 cni.go:84] Creating CNI manager for ""
	I0918 13:34:14.110306    5350 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 13:34:14.110315    5350 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:34:14.110342    5350 start.go:340] cluster config:
	{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:34:14.114082    5350 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:34:14.122213    5350 out.go:177] * Starting "auto-838000" primary control-plane node in "auto-838000" cluster
	I0918 13:34:14.126152    5350 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:34:14.126166    5350 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:34:14.126173    5350 cache.go:56] Caching tarball of preloaded images
	I0918 13:34:14.126229    5350 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:34:14.126235    5350 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:34:14.126287    5350 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/auto-838000/config.json ...
	I0918 13:34:14.126298    5350 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/auto-838000/config.json: {Name:mk80bc05b797260e17e7ce659e4176b1835c3452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:34:14.126519    5350 start.go:360] acquireMachinesLock for auto-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:14.126554    5350 start.go:364] duration metric: took 28.541µs to acquireMachinesLock for "auto-838000"
	I0918 13:34:14.126564    5350 start.go:93] Provisioning new machine with config: &{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:14.126588    5350 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:14.134183    5350 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:14.153244    5350 start.go:159] libmachine.API.Create for "auto-838000" (driver="qemu2")
	I0918 13:34:14.153282    5350 client.go:168] LocalClient.Create starting
	I0918 13:34:14.153343    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:14.153375    5350 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:14.153384    5350 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:14.153420    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:14.153445    5350 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:14.153455    5350 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:14.153784    5350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:14.333382    5350 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:14.410644    5350 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:14.410649    5350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:14.410829    5350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2
	I0918 13:34:14.420386    5350 main.go:141] libmachine: STDOUT: 
	I0918 13:34:14.420402    5350 main.go:141] libmachine: STDERR: 
	I0918 13:34:14.420465    5350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2 +20000M
	I0918 13:34:14.428395    5350 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:14.428409    5350 main.go:141] libmachine: STDERR: 
	I0918 13:34:14.428423    5350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2
	I0918 13:34:14.428432    5350 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:14.428443    5350 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:14.428476    5350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:9f:b4:61:56:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2
	I0918 13:34:14.430104    5350 main.go:141] libmachine: STDOUT: 
	I0918 13:34:14.430116    5350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:14.430136    5350 client.go:171] duration metric: took 276.854541ms to LocalClient.Create
	I0918 13:34:16.432281    5350 start.go:128] duration metric: took 2.305729041s to createHost
	I0918 13:34:16.432343    5350 start.go:83] releasing machines lock for "auto-838000", held for 2.30583625s
	W0918 13:34:16.432414    5350 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:16.440574    5350 out.go:177] * Deleting "auto-838000" in qemu2 ...
	W0918 13:34:16.479482    5350 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:16.479505    5350 start.go:729] Will try again in 5 seconds ...
	I0918 13:34:21.480665    5350 start.go:360] acquireMachinesLock for auto-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:21.481132    5350 start.go:364] duration metric: took 376.75µs to acquireMachinesLock for "auto-838000"
	I0918 13:34:21.481246    5350 start.go:93] Provisioning new machine with config: &{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:auto-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:21.481493    5350 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:21.499887    5350 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:21.550838    5350 start.go:159] libmachine.API.Create for "auto-838000" (driver="qemu2")
	I0918 13:34:21.550886    5350 client.go:168] LocalClient.Create starting
	I0918 13:34:21.551006    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:21.551075    5350 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:21.551093    5350 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:21.551157    5350 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:21.551203    5350 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:21.551221    5350 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:21.551732    5350 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:21.725606    5350 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:21.908595    5350 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:21.908601    5350 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:21.908805    5350 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2
	I0918 13:34:21.918708    5350 main.go:141] libmachine: STDOUT: 
	I0918 13:34:21.918725    5350 main.go:141] libmachine: STDERR: 
	I0918 13:34:21.918783    5350 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2 +20000M
	I0918 13:34:21.926858    5350 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:21.926872    5350 main.go:141] libmachine: STDERR: 
	I0918 13:34:21.926886    5350 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2
	I0918 13:34:21.926893    5350 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:21.926902    5350 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:21.926941    5350 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:60:fd:15:62:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/auto-838000/disk.qcow2
	I0918 13:34:21.928562    5350 main.go:141] libmachine: STDOUT: 
	I0918 13:34:21.928582    5350 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:21.928595    5350 client.go:171] duration metric: took 377.712833ms to LocalClient.Create
	I0918 13:34:23.930730    5350 start.go:128] duration metric: took 2.4492605s to createHost
	I0918 13:34:23.930795    5350 start.go:83] releasing machines lock for "auto-838000", held for 2.449700834s
	W0918 13:34:23.931184    5350 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:23.948089    5350 out.go:201] 
	W0918 13:34:23.952752    5350 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:34:23.952774    5350 out.go:270] * 
	* 
	W0918 13:34:23.954288    5350 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:34:23.965683    5350 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.97s)
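Note: the start path visible in this trace is: create host, StartHost fails, delete the profile, wait 5 seconds, retry once, exit 80 (GUEST_PROVISION). Both attempts die at the same point, before the guest ever boots, when socket_vmnet_client cannot connect. The socket can also be probed directly, outside minikube; a hedged check, assuming the BSD netcat shipped with macOS:

    # a healthy socket_vmnet daemon accepts unix-socket connections here;
    # "Connection refused" reproduces the failure without involving minikube
    nc -U /var/run/socket_vmnet < /dev/null && echo reachable || echo refused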

TestNetworkPlugins/group/calico/Start (9.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0918 13:34:27.266736    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.841285708s)

-- stdout --
	* [calico-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-838000" primary control-plane node in "calico-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:34:26.122232    5459 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:34:26.122349    5459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:26.122352    5459 out.go:358] Setting ErrFile to fd 2...
	I0918 13:34:26.122355    5459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:26.122485    5459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:34:26.123614    5459 out.go:352] Setting JSON to false
	I0918 13:34:26.139667    5459 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3825,"bootTime":1726687841,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:34:26.139741    5459 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:34:26.146163    5459 out.go:177] * [calico-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:34:26.153955    5459 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:34:26.154010    5459 notify.go:220] Checking for updates...
	I0918 13:34:26.158898    5459 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:34:26.161891    5459 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:34:26.164971    5459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:34:26.167861    5459 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:34:26.170887    5459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:34:26.174208    5459 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:26.174272    5459 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:26.174317    5459 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:34:26.178922    5459 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:34:26.185890    5459 start.go:297] selected driver: qemu2
	I0918 13:34:26.185896    5459 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:34:26.185903    5459 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:34:26.188121    5459 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:34:26.190888    5459 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:34:26.194069    5459 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:34:26.194086    5459 cni.go:84] Creating CNI manager for "calico"
	I0918 13:34:26.194089    5459 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0918 13:34:26.194124    5459 start.go:340] cluster config:
	{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:34:26.197680    5459 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:34:26.203865    5459 out.go:177] * Starting "calico-838000" primary control-plane node in "calico-838000" cluster
	I0918 13:34:26.207847    5459 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:34:26.207860    5459 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:34:26.207867    5459 cache.go:56] Caching tarball of preloaded images
	I0918 13:34:26.207920    5459 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:34:26.207926    5459 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:34:26.207988    5459 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/calico-838000/config.json ...
	I0918 13:34:26.208000    5459 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/calico-838000/config.json: {Name:mkd6eb351e5696baea38392a1b62aad13bbd317e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:34:26.208213    5459 start.go:360] acquireMachinesLock for calico-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:26.208247    5459 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "calico-838000"
	I0918 13:34:26.208257    5459 start.go:93] Provisioning new machine with config: &{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:26.208282    5459 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:26.216879    5459 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:26.235048    5459 start.go:159] libmachine.API.Create for "calico-838000" (driver="qemu2")
	I0918 13:34:26.235073    5459 client.go:168] LocalClient.Create starting
	I0918 13:34:26.235136    5459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:26.235163    5459 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:26.235173    5459 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:26.235207    5459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:26.235230    5459 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:26.235237    5459 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:26.235585    5459 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:26.399216    5459 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:26.473984    5459 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:26.473990    5459 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:26.474175    5459 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2
	I0918 13:34:26.483482    5459 main.go:141] libmachine: STDOUT: 
	I0918 13:34:26.483502    5459 main.go:141] libmachine: STDERR: 
	I0918 13:34:26.483556    5459 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2 +20000M
	I0918 13:34:26.491494    5459 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:26.491510    5459 main.go:141] libmachine: STDERR: 
	I0918 13:34:26.491533    5459 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2
	I0918 13:34:26.491538    5459 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:26.491549    5459 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:26.491578    5459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5c:83:d0:15:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2
	I0918 13:34:26.493212    5459 main.go:141] libmachine: STDOUT: 
	I0918 13:34:26.493225    5459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:26.493244    5459 client.go:171] duration metric: took 258.170916ms to LocalClient.Create
	I0918 13:34:28.495370    5459 start.go:128] duration metric: took 2.287121916s to createHost
	I0918 13:34:28.495458    5459 start.go:83] releasing machines lock for "calico-838000", held for 2.287261834s
	W0918 13:34:28.495500    5459 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:28.511028    5459 out.go:177] * Deleting "calico-838000" in qemu2 ...
	W0918 13:34:28.544940    5459 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:28.544963    5459 start.go:729] Will try again in 5 seconds ...
	I0918 13:34:33.547097    5459 start.go:360] acquireMachinesLock for calico-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:33.547687    5459 start.go:364] duration metric: took 376.459µs to acquireMachinesLock for "calico-838000"
	I0918 13:34:33.547819    5459 start.go:93] Provisioning new machine with config: &{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:33.548152    5459 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:33.552922    5459 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:33.604913    5459 start.go:159] libmachine.API.Create for "calico-838000" (driver="qemu2")
	I0918 13:34:33.604967    5459 client.go:168] LocalClient.Create starting
	I0918 13:34:33.605099    5459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:33.605170    5459 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:33.605185    5459 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:33.605245    5459 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:33.605291    5459 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:33.605327    5459 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:33.605873    5459 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:33.788846    5459 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:33.859781    5459 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:33.859786    5459 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:33.859966    5459 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2
	I0918 13:34:33.869668    5459 main.go:141] libmachine: STDOUT: 
	I0918 13:34:33.869686    5459 main.go:141] libmachine: STDERR: 
	I0918 13:34:33.869752    5459 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2 +20000M
	I0918 13:34:33.877798    5459 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:33.877822    5459 main.go:141] libmachine: STDERR: 
	I0918 13:34:33.877834    5459 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2
	I0918 13:34:33.877839    5459 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:33.877846    5459 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:33.877882    5459 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:30:73:0a:f6:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/calico-838000/disk.qcow2
	I0918 13:34:33.879567    5459 main.go:141] libmachine: STDOUT: 
	I0918 13:34:33.879627    5459 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:33.879641    5459 client.go:171] duration metric: took 274.6765ms to LocalClient.Create
	I0918 13:34:35.881766    5459 start.go:128] duration metric: took 2.333631167s to createHost
	I0918 13:34:35.881842    5459 start.go:83] releasing machines lock for "calico-838000", held for 2.334187167s
	W0918 13:34:35.882141    5459 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:35.897881    5459 out.go:201] 
	W0918 13:34:35.900943    5459 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:34:35.901006    5459 out.go:270] * 
	* 
	W0918 13:34:35.903533    5459 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:34:35.920798    5459 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.84s)
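
Note: every failure in this group reduces to the same stderr line: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach a socket_vmnet daemon at /var/run/socket_vmnet. A minimal sketch for confirming that from the agent, assuming only the socket path shown in the qemu command line above (Go standard library, nothing minikube-specific):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client uses.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With no daemon serving the socket this typically prints:
			// dial unix /var/run/socket_vmnet: connect: connection refused
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails the same way, the failures below are environmental (the daemon is down on this agent) rather than specific to any one network plugin.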

TestNetworkPlugins/group/custom-flannel/Start (9.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.923637625s)

-- stdout --
	* [custom-flannel-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-838000" primary control-plane node in "custom-flannel-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:34:38.311099    5582 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:34:38.311233    5582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:38.311237    5582 out.go:358] Setting ErrFile to fd 2...
	I0918 13:34:38.311239    5582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:38.311386    5582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:34:38.312479    5582 out.go:352] Setting JSON to false
	I0918 13:34:38.328525    5582 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3837,"bootTime":1726687841,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:34:38.328592    5582 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:34:38.333078    5582 out.go:177] * [custom-flannel-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:34:38.342036    5582 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:34:38.342101    5582 notify.go:220] Checking for updates...
	I0918 13:34:38.347975    5582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:34:38.350896    5582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:34:38.353820    5582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:34:38.356837    5582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:34:38.359909    5582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:34:38.361849    5582 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:38.361919    5582 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:38.361974    5582 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:34:38.365906    5582 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:34:38.372729    5582 start.go:297] selected driver: qemu2
	I0918 13:34:38.372735    5582 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:34:38.372741    5582 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:34:38.375020    5582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:34:38.377911    5582 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:34:38.380998    5582 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:34:38.381015    5582 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0918 13:34:38.381029    5582 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0918 13:34:38.381059    5582 start.go:340] cluster config:
	{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:34:38.384693    5582 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:34:38.391891    5582 out.go:177] * Starting "custom-flannel-838000" primary control-plane node in "custom-flannel-838000" cluster
	I0918 13:34:38.395884    5582 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:34:38.395897    5582 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:34:38.395904    5582 cache.go:56] Caching tarball of preloaded images
	I0918 13:34:38.395956    5582 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:34:38.395961    5582 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:34:38.396013    5582 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/custom-flannel-838000/config.json ...
	I0918 13:34:38.396023    5582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/custom-flannel-838000/config.json: {Name:mke4e0f41f64ffbff049da9df34cbc9acad3bce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:34:38.396235    5582 start.go:360] acquireMachinesLock for custom-flannel-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:38.396273    5582 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "custom-flannel-838000"
	I0918 13:34:38.396283    5582 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:38.396307    5582 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:38.404894    5582 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:38.422439    5582 start.go:159] libmachine.API.Create for "custom-flannel-838000" (driver="qemu2")
	I0918 13:34:38.422466    5582 client.go:168] LocalClient.Create starting
	I0918 13:34:38.422525    5582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:38.422556    5582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:38.422565    5582 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:38.422603    5582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:38.422626    5582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:38.422634    5582 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:38.422975    5582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:38.587832    5582 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:38.660369    5582 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:38.660374    5582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:38.660552    5582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0918 13:34:38.670153    5582 main.go:141] libmachine: STDOUT: 
	I0918 13:34:38.670250    5582 main.go:141] libmachine: STDERR: 
	I0918 13:34:38.670310    5582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2 +20000M
	I0918 13:34:38.678245    5582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:38.678304    5582 main.go:141] libmachine: STDERR: 
	I0918 13:34:38.678322    5582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0918 13:34:38.678326    5582 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:38.678338    5582 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:38.678370    5582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5b:9f:9f:68:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0918 13:34:38.679992    5582 main.go:141] libmachine: STDOUT: 
	I0918 13:34:38.680043    5582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:38.680063    5582 client.go:171] duration metric: took 257.5985ms to LocalClient.Create
	I0918 13:34:40.682230    5582 start.go:128] duration metric: took 2.28595125s to createHost
	I0918 13:34:40.682341    5582 start.go:83] releasing machines lock for "custom-flannel-838000", held for 2.2861185s
	W0918 13:34:40.682432    5582 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:40.703771    5582 out.go:177] * Deleting "custom-flannel-838000" in qemu2 ...
	W0918 13:34:40.737331    5582 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:40.737359    5582 start.go:729] Will try again in 5 seconds ...
	I0918 13:34:45.739493    5582 start.go:360] acquireMachinesLock for custom-flannel-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:45.740025    5582 start.go:364] duration metric: took 430.042µs to acquireMachinesLock for "custom-flannel-838000"
	I0918 13:34:45.740164    5582 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:45.740477    5582 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:45.759980    5582 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:45.811783    5582 start.go:159] libmachine.API.Create for "custom-flannel-838000" (driver="qemu2")
	I0918 13:34:45.811843    5582 client.go:168] LocalClient.Create starting
	I0918 13:34:45.811977    5582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:45.812059    5582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:45.812078    5582 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:45.812145    5582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:45.812192    5582 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:45.812202    5582 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:45.812793    5582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:45.989788    5582 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:46.132297    5582 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:46.132304    5582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:46.132505    5582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0918 13:34:46.142469    5582 main.go:141] libmachine: STDOUT: 
	I0918 13:34:46.142488    5582 main.go:141] libmachine: STDERR: 
	I0918 13:34:46.142538    5582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2 +20000M
	I0918 13:34:46.150544    5582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:46.150561    5582 main.go:141] libmachine: STDERR: 
	I0918 13:34:46.150580    5582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0918 13:34:46.150585    5582 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:46.150594    5582 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:46.150620    5582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:c9:41:14:af:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0918 13:34:46.152222    5582 main.go:141] libmachine: STDOUT: 
	I0918 13:34:46.152236    5582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:46.152247    5582 client.go:171] duration metric: took 340.407166ms to LocalClient.Create
	I0918 13:34:48.154368    5582 start.go:128] duration metric: took 2.413907834s to createHost
	I0918 13:34:48.154427    5582 start.go:83] releasing machines lock for "custom-flannel-838000", held for 2.414436s
	W0918 13:34:48.154857    5582 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:48.170460    5582 out.go:201] 
	W0918 13:34:48.174653    5582 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:34:48.174680    5582 out.go:270] * 
	* 
	W0918 13:34:48.177518    5582 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:34:48.193517    5582 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.93s)
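
Note: the log above also shows minikube's recovery path: the first createHost fails, the half-created profile is deleted, and exactly one retry runs five seconds later ("Will try again in 5 seconds ...") before the run exits with GUEST_PROVISION. A hedged sketch of that control flow; the function names here are illustrative, not minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the flow in the log: one failed create,
	// a profile deletion, a five-second pause, then a single retry.
	func startWithRetry(create func() error, deleteProfile func()) error {
		if err := create(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteProfile()
			time.Sleep(5 * time.Second)
			return create()
		}
		return nil
	}

	func main() {
		err := startWithRetry(
			// Stand-in for the real create step, which fails on the refused socket.
			func() error { return errors.New(`connect to "/var/run/socket_vmnet": connection refused`) },
			func() { fmt.Println(`* Deleting "custom-flannel-838000" in qemu2 ...`) },
		)
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because both attempts hit the same refused socket, the retry adds only the ~5s pause visible in each test's ~10s duration (two ~2.3s create attempts plus the wait).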

TestNetworkPlugins/group/false/Start (10.16s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.162122375s)

-- stdout --
	* [false-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-838000" primary control-plane node in "false-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:34:50.611152    5706 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:34:50.611287    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:50.611291    5706 out.go:358] Setting ErrFile to fd 2...
	I0918 13:34:50.611293    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:34:50.611429    5706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:34:50.612494    5706 out.go:352] Setting JSON to false
	I0918 13:34:50.628736    5706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3849,"bootTime":1726687841,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:34:50.628812    5706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:34:50.635803    5706 out.go:177] * [false-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:34:50.644675    5706 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:34:50.644708    5706 notify.go:220] Checking for updates...
	I0918 13:34:50.649577    5706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:34:50.652619    5706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:34:50.655647    5706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:34:50.658564    5706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:34:50.661597    5706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:34:50.664933    5706 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:50.665001    5706 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:34:50.665046    5706 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:34:50.669560    5706 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:34:50.676625    5706 start.go:297] selected driver: qemu2
	I0918 13:34:50.676632    5706 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:34:50.676638    5706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:34:50.679055    5706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:34:50.682640    5706 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:34:50.685667    5706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:34:50.685686    5706 cni.go:84] Creating CNI manager for "false"
	I0918 13:34:50.685718    5706 start.go:340] cluster config:
	{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:34:50.689500    5706 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:34:50.695639    5706 out.go:177] * Starting "false-838000" primary control-plane node in "false-838000" cluster
	I0918 13:34:50.699553    5706 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:34:50.699567    5706 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:34:50.699586    5706 cache.go:56] Caching tarball of preloaded images
	I0918 13:34:50.699651    5706 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:34:50.699657    5706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:34:50.699719    5706 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/false-838000/config.json ...
	I0918 13:34:50.699732    5706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/false-838000/config.json: {Name:mkf84d4d4f91b1a9d9d3138a4b5955ec00e065f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:34:50.700179    5706 start.go:360] acquireMachinesLock for false-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:50.700222    5706 start.go:364] duration metric: took 34.625µs to acquireMachinesLock for "false-838000"
	I0918 13:34:50.700235    5706 start.go:93] Provisioning new machine with config: &{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:50.700268    5706 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:50.707630    5706 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:50.727050    5706 start.go:159] libmachine.API.Create for "false-838000" (driver="qemu2")
	I0918 13:34:50.727087    5706 client.go:168] LocalClient.Create starting
	I0918 13:34:50.727150    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:50.727183    5706 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:50.727193    5706 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:50.727231    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:50.727255    5706 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:50.727262    5706 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:50.727616    5706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:50.892924    5706 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:51.002569    5706 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:51.002575    5706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:51.002743    5706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2
	I0918 13:34:51.012339    5706 main.go:141] libmachine: STDOUT: 
	I0918 13:34:51.012360    5706 main.go:141] libmachine: STDERR: 
	I0918 13:34:51.012414    5706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2 +20000M
	I0918 13:34:51.020399    5706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:51.020415    5706 main.go:141] libmachine: STDERR: 
	I0918 13:34:51.020427    5706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2
	I0918 13:34:51.020431    5706 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:51.020441    5706 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:51.020468    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:9d:e5:b9:02:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2
	I0918 13:34:51.022128    5706 main.go:141] libmachine: STDOUT: 
	I0918 13:34:51.022144    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:51.022174    5706 client.go:171] duration metric: took 295.088375ms to LocalClient.Create
	I0918 13:34:53.024329    5706 start.go:128] duration metric: took 2.32409525s to createHost
	I0918 13:34:53.024390    5706 start.go:83] releasing machines lock for "false-838000", held for 2.324218625s
	W0918 13:34:53.024433    5706 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:53.038680    5706 out.go:177] * Deleting "false-838000" in qemu2 ...
	W0918 13:34:53.076452    5706 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:34:53.076467    5706 start.go:729] Will try again in 5 seconds ...
	I0918 13:34:58.078649    5706 start.go:360] acquireMachinesLock for false-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:34:58.079156    5706 start.go:364] duration metric: took 408.459µs to acquireMachinesLock for "false-838000"
	I0918 13:34:58.079281    5706 start.go:93] Provisioning new machine with config: &{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:34:58.079578    5706 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:34:58.100341    5706 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:34:58.153796    5706 start.go:159] libmachine.API.Create for "false-838000" (driver="qemu2")
	I0918 13:34:58.153839    5706 client.go:168] LocalClient.Create starting
	I0918 13:34:58.153969    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:34:58.154043    5706 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:58.154059    5706 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:58.154126    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:34:58.154176    5706 main.go:141] libmachine: Decoding PEM data...
	I0918 13:34:58.154192    5706 main.go:141] libmachine: Parsing certificate...
	I0918 13:34:58.154887    5706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:34:58.327476    5706 main.go:141] libmachine: Creating SSH key...
	I0918 13:34:58.670551    5706 main.go:141] libmachine: Creating Disk image...
	I0918 13:34:58.670560    5706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:34:58.670770    5706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2
	I0918 13:34:58.680610    5706 main.go:141] libmachine: STDOUT: 
	I0918 13:34:58.680627    5706 main.go:141] libmachine: STDERR: 
	I0918 13:34:58.680686    5706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2 +20000M
	I0918 13:34:58.688848    5706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:34:58.688863    5706 main.go:141] libmachine: STDERR: 
	I0918 13:34:58.688877    5706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2
	I0918 13:34:58.688883    5706 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:34:58.688891    5706 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:34:58.688924    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:d7:52:77:17:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/false-838000/disk.qcow2
	I0918 13:34:58.690597    5706 main.go:141] libmachine: STDOUT: 
	I0918 13:34:58.690610    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:34:58.690623    5706 client.go:171] duration metric: took 536.791791ms to LocalClient.Create
	I0918 13:35:00.692790    5706 start.go:128] duration metric: took 2.613226667s to createHost
	I0918 13:35:00.692846    5706 start.go:83] releasing machines lock for "false-838000", held for 2.613736834s
	W0918 13:35:00.693210    5706 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:00.703792    5706 out.go:201] 
	W0918 13:35:00.715719    5706 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:35:00.715749    5706 out.go:270] * 
	* 
	W0918 13:35:00.718037    5706 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:35:00.729825    5706 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.16s)
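
Note: net_test.go only observes the process exit code here ("failed start: exit status 80"). A minimal sketch of recovering that code with os/exec, re-running the same command the test ran (binary path and flags copied from the invocation above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "false-838000",
			"--memory=3072", "--alsologtostderr", "--wait=true", "--wait-timeout=15m",
			"--cni=false", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// For the runs above this prints: failed start: exit status 80
			fmt.Printf("failed start: exit status %d\n%s", exitErr.ExitCode(), out)
		}
	}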

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.846068875s)

-- stdout --
	* [kindnet-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-838000" primary control-plane node in "kindnet-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:35:02.962314    5815 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:35:02.962447    5815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:02.962450    5815 out.go:358] Setting ErrFile to fd 2...
	I0918 13:35:02.962453    5815 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:02.962579    5815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:35:02.963761    5815 out.go:352] Setting JSON to false
	I0918 13:35:02.979929    5815 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3861,"bootTime":1726687841,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:35:02.979998    5815 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:35:02.986478    5815 out.go:177] * [kindnet-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:35:02.994440    5815 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:35:02.994506    5815 notify.go:220] Checking for updates...
	I0918 13:35:02.999377    5815 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:35:03.002433    5815 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:35:03.005422    5815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:35:03.008345    5815 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:35:03.011368    5815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:35:03.014794    5815 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:03.014861    5815 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:03.014917    5815 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:35:03.019376    5815 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:35:03.026381    5815 start.go:297] selected driver: qemu2
	I0918 13:35:03.026387    5815 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:35:03.026394    5815 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:35:03.028825    5815 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:35:03.031388    5815 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:35:03.034509    5815 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:35:03.034538    5815 cni.go:84] Creating CNI manager for "kindnet"
	I0918 13:35:03.034549    5815 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 13:35:03.034580    5815 start.go:340] cluster config:
	{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:35:03.038444    5815 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:35:03.045365    5815 out.go:177] * Starting "kindnet-838000" primary control-plane node in "kindnet-838000" cluster
	I0918 13:35:03.049420    5815 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:35:03.049438    5815 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:35:03.049452    5815 cache.go:56] Caching tarball of preloaded images
	I0918 13:35:03.049523    5815 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:35:03.049529    5815 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:35:03.049606    5815 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/kindnet-838000/config.json ...
	I0918 13:35:03.049623    5815 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/kindnet-838000/config.json: {Name:mkf72a01eac56540b1623b7bcfcf81373e2ddb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:35:03.049846    5815 start.go:360] acquireMachinesLock for kindnet-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:03.049880    5815 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "kindnet-838000"
	I0918 13:35:03.049891    5815 start.go:93] Provisioning new machine with config: &{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:03.049916    5815 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:03.058380    5815 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:03.077092    5815 start.go:159] libmachine.API.Create for "kindnet-838000" (driver="qemu2")
	I0918 13:35:03.077130    5815 client.go:168] LocalClient.Create starting
	I0918 13:35:03.077200    5815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:03.077238    5815 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:03.077250    5815 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:03.077294    5815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:03.077319    5815 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:03.077330    5815 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:03.077715    5815 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:03.242515    5815 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:03.293116    5815 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:03.293124    5815 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:03.293291    5815 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2
	I0918 13:35:03.302697    5815 main.go:141] libmachine: STDOUT: 
	I0918 13:35:03.302723    5815 main.go:141] libmachine: STDERR: 
	I0918 13:35:03.302786    5815 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2 +20000M
	I0918 13:35:03.310784    5815 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:03.310799    5815 main.go:141] libmachine: STDERR: 
	I0918 13:35:03.310836    5815 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2
	I0918 13:35:03.310841    5815 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:03.310854    5815 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:03.310882    5815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:f5:36:c0:e8:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2
	I0918 13:35:03.312493    5815 main.go:141] libmachine: STDOUT: 
	I0918 13:35:03.312506    5815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:03.312527    5815 client.go:171] duration metric: took 235.396416ms to LocalClient.Create
	I0918 13:35:05.314679    5815 start.go:128] duration metric: took 2.26480225s to createHost
	I0918 13:35:05.314734    5815 start.go:83] releasing machines lock for "kindnet-838000", held for 2.264903917s
	W0918 13:35:05.314791    5815 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:05.330117    5815 out.go:177] * Deleting "kindnet-838000" in qemu2 ...
	W0918 13:35:05.367458    5815 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:05.367479    5815 start.go:729] Will try again in 5 seconds ...
	I0918 13:35:10.369619    5815 start.go:360] acquireMachinesLock for kindnet-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:10.370127    5815 start.go:364] duration metric: took 395.416µs to acquireMachinesLock for "kindnet-838000"
	I0918 13:35:10.370748    5815 start.go:93] Provisioning new machine with config: &{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:10.371137    5815 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:10.392061    5815 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:10.446252    5815 start.go:159] libmachine.API.Create for "kindnet-838000" (driver="qemu2")
	I0918 13:35:10.446307    5815 client.go:168] LocalClient.Create starting
	I0918 13:35:10.446437    5815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:10.446505    5815 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:10.446524    5815 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:10.446582    5815 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:10.446627    5815 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:10.446639    5815 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:10.447187    5815 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:10.619477    5815 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:10.706363    5815 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:10.706368    5815 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:10.706546    5815 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2
	I0918 13:35:10.715648    5815 main.go:141] libmachine: STDOUT: 
	I0918 13:35:10.715665    5815 main.go:141] libmachine: STDERR: 
	I0918 13:35:10.715717    5815 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2 +20000M
	I0918 13:35:10.723657    5815 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:10.723672    5815 main.go:141] libmachine: STDERR: 
	I0918 13:35:10.723684    5815 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2
	I0918 13:35:10.723689    5815 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:10.723701    5815 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:10.723731    5815 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d2:a6:f0:ae:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kindnet-838000/disk.qcow2
	I0918 13:35:10.725449    5815 main.go:141] libmachine: STDOUT: 
	I0918 13:35:10.725464    5815 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:10.725477    5815 client.go:171] duration metric: took 279.171083ms to LocalClient.Create
	I0918 13:35:12.727603    5815 start.go:128] duration metric: took 2.356498125s to createHost
	I0918 13:35:12.727663    5815 start.go:83] releasing machines lock for "kindnet-838000", held for 2.357572959s
	W0918 13:35:12.728022    5815 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:12.743619    5815 out.go:201] 
	W0918 13:35:12.748767    5815 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:35:12.748804    5815 out.go:270] * 
	W0918 13:35:12.751458    5815 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:35:12.764658    5815 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)
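
Every start in this group fails at the same step: qemu-img convert/resize both return cleanly, and the run only dies when /opt/socket_vmnet/bin/socket_vmnet_client tries to reach the unix socket at /var/run/socket_vmnet and gets "Connection refused", meaning nothing is listening on the agent. The minimal Go probe below reproduces just that check; it is a hypothetical standalone diagnostic, not part of minikube or net_test.go, with the socket path taken from the failing command lines above.

-- go sketch --
	// probesock.go: hypothetical diagnostic, not part of minikube.
	// Dials the unix socket socket_vmnet_client needs: "connection refused"
	// means no socket_vmnet daemon is listening; "permission denied" means
	// the socket exists but this user cannot open it.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing commands above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
-- /go sketch --

If the probe reports "connection refused", bringing the socket_vmnet daemon back up on the agent before rerunning should clear this whole family of exit-status-80 failures.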

TestNetworkPlugins/group/flannel/Start (10.07s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.063005833s)

-- stdout --
	* [flannel-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-838000" primary control-plane node in "flannel-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:35:15.091353    5929 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:35:15.091471    5929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:15.091474    5929 out.go:358] Setting ErrFile to fd 2...
	I0918 13:35:15.091486    5929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:15.091625    5929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:35:15.092682    5929 out.go:352] Setting JSON to false
	I0918 13:35:15.108577    5929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3874,"bootTime":1726687841,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:35:15.108667    5929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:35:15.115534    5929 out.go:177] * [flannel-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:35:15.121486    5929 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:35:15.121527    5929 notify.go:220] Checking for updates...
	I0918 13:35:15.127441    5929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:35:15.130472    5929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:35:15.133445    5929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:35:15.136411    5929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:35:15.139415    5929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:35:15.142855    5929 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:15.142922    5929 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:15.142967    5929 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:35:15.146454    5929 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:35:15.153442    5929 start.go:297] selected driver: qemu2
	I0918 13:35:15.153448    5929 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:35:15.153454    5929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:35:15.155684    5929 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:35:15.157287    5929 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:35:15.160468    5929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:35:15.160484    5929 cni.go:84] Creating CNI manager for "flannel"
	I0918 13:35:15.160489    5929 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0918 13:35:15.160517    5929 start.go:340] cluster config:
	{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:35:15.164243    5929 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:35:15.171423    5929 out.go:177] * Starting "flannel-838000" primary control-plane node in "flannel-838000" cluster
	I0918 13:35:15.175382    5929 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:35:15.175396    5929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:35:15.175407    5929 cache.go:56] Caching tarball of preloaded images
	I0918 13:35:15.175469    5929 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:35:15.175475    5929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:35:15.175533    5929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/flannel-838000/config.json ...
	I0918 13:35:15.175543    5929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/flannel-838000/config.json: {Name:mk73a3bc593fecf337072ec3e8222c700046fcb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:35:15.175758    5929 start.go:360] acquireMachinesLock for flannel-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:15.175792    5929 start.go:364] duration metric: took 28.417µs to acquireMachinesLock for "flannel-838000"
	I0918 13:35:15.175803    5929 start.go:93] Provisioning new machine with config: &{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:15.175829    5929 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:15.184436    5929 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:15.202897    5929 start.go:159] libmachine.API.Create for "flannel-838000" (driver="qemu2")
	I0918 13:35:15.203009    5929 client.go:168] LocalClient.Create starting
	I0918 13:35:15.203069    5929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:15.203107    5929 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:15.203117    5929 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:15.203152    5929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:15.203176    5929 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:15.203184    5929 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:15.203610    5929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:15.369713    5929 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:15.543076    5929 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:15.543087    5929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:15.543288    5929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2
	I0918 13:35:15.552909    5929 main.go:141] libmachine: STDOUT: 
	I0918 13:35:15.552931    5929 main.go:141] libmachine: STDERR: 
	I0918 13:35:15.552991    5929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2 +20000M
	I0918 13:35:15.560908    5929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:15.560923    5929 main.go:141] libmachine: STDERR: 
	I0918 13:35:15.560947    5929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2
	I0918 13:35:15.560952    5929 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:15.560962    5929 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:15.560990    5929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:e9:1c:cf:de:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2
	I0918 13:35:15.562551    5929 main.go:141] libmachine: STDOUT: 
	I0918 13:35:15.562566    5929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:15.562586    5929 client.go:171] duration metric: took 359.580125ms to LocalClient.Create
	I0918 13:35:17.564707    5929 start.go:128] duration metric: took 2.388914875s to createHost
	I0918 13:35:17.564765    5929 start.go:83] releasing machines lock for "flannel-838000", held for 2.389026291s
	W0918 13:35:17.564868    5929 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:17.581268    5929 out.go:177] * Deleting "flannel-838000" in qemu2 ...
	W0918 13:35:17.614554    5929 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:17.614581    5929 start.go:729] Will try again in 5 seconds ...
	I0918 13:35:22.616657    5929 start.go:360] acquireMachinesLock for flannel-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:22.617223    5929 start.go:364] duration metric: took 448.667µs to acquireMachinesLock for "flannel-838000"
	I0918 13:35:22.617350    5929 start.go:93] Provisioning new machine with config: &{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:22.617675    5929 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:22.637495    5929 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:22.690753    5929 start.go:159] libmachine.API.Create for "flannel-838000" (driver="qemu2")
	I0918 13:35:22.690811    5929 client.go:168] LocalClient.Create starting
	I0918 13:35:22.690928    5929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:22.691001    5929 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:22.691018    5929 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:22.691084    5929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:22.691128    5929 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:22.691138    5929 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:22.691825    5929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:22.865860    5929 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:23.048847    5929 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:23.048856    5929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:23.049023    5929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2
	I0918 13:35:23.058824    5929 main.go:141] libmachine: STDOUT: 
	I0918 13:35:23.058849    5929 main.go:141] libmachine: STDERR: 
	I0918 13:35:23.058910    5929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2 +20000M
	I0918 13:35:23.066831    5929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:23.066845    5929 main.go:141] libmachine: STDERR: 
	I0918 13:35:23.066858    5929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2
	I0918 13:35:23.066870    5929 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:23.066876    5929 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:23.066916    5929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:1a:27:99:bf:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/flannel-838000/disk.qcow2
	I0918 13:35:23.068488    5929 main.go:141] libmachine: STDOUT: 
	I0918 13:35:23.068506    5929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:23.068519    5929 client.go:171] duration metric: took 377.712541ms to LocalClient.Create
	I0918 13:35:25.070635    5929 start.go:128] duration metric: took 2.4529925s to createHost
	I0918 13:35:25.070684    5929 start.go:83] releasing machines lock for "flannel-838000", held for 2.453499583s
	W0918 13:35:25.071044    5929 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:25.086988    5929 out.go:201] 
	W0918 13:35:25.091823    5929 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:35:25.091846    5929 out.go:270] * 
	W0918 13:35:25.094413    5929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:35:25.111712    5929 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.07s)
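
The flannel group fails identically, and its 10.07s duration matches the retry pattern visible in the log: each createHost attempt takes roughly 2.4 seconds, and after the first failure minikube deletes the machine, waits 5 seconds ("Will try again in 5 seconds ..."), retries once, then gives up with exit status 80. A simplified model of that control flow is sketched below in Go; it only illustrates the pattern in these logs and is not minikube's actual source.

-- go sketch --
	// startretry.go: a simplified model of the start/retry flow above,
	// for illustration only (not minikube's implementation).
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the qemu2 driver's host-creation step; it
	// fails here the same way both attempts fail in the logs above.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status net_test.go:114 reports
			}
		}
		fmt.Println("host created")
	}
-- /go sketch --

Two roughly 2.4-second attempts plus the 5-second wait account for the roughly 10-second duration reported for each Start failure in this family.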

TestNetworkPlugins/group/enable-default-cni/Start (10.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.082356083s)

-- stdout --
	* [enable-default-cni-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-838000" primary control-plane node in "enable-default-cni-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0918 13:35:27.513245    6048 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:35:27.513360    6048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:27.513364    6048 out.go:358] Setting ErrFile to fd 2...
	I0918 13:35:27.513366    6048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:27.513519    6048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:35:27.514540    6048 out.go:352] Setting JSON to false
	I0918 13:35:27.530510    6048 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3886,"bootTime":1726687841,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:35:27.530578    6048 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:35:27.537341    6048 out.go:177] * [enable-default-cni-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:35:27.546244    6048 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:35:27.546293    6048 notify.go:220] Checking for updates...
	I0918 13:35:27.553206    6048 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:35:27.556221    6048 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:35:27.557804    6048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:35:27.561233    6048 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:35:27.564198    6048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:35:27.567613    6048 config.go:182] Loaded profile config "cert-expiration-319000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:27.567683    6048 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:27.567720    6048 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:35:27.572199    6048 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:35:27.579172    6048 start.go:297] selected driver: qemu2
	I0918 13:35:27.579179    6048 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:35:27.579185    6048 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:35:27.581430    6048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:35:27.585360    6048 out.go:177] * Automatically selected the socket_vmnet network
	E0918 13:35:27.588270    6048 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0918 13:35:27.588282    6048 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:35:27.588297    6048 cni.go:84] Creating CNI manager for "bridge"
	I0918 13:35:27.588309    6048 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:35:27.588348    6048 start.go:340] cluster config:
	{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:35:27.591770    6048 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:35:27.599253    6048 out.go:177] * Starting "enable-default-cni-838000" primary control-plane node in "enable-default-cni-838000" cluster
	I0918 13:35:27.603187    6048 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:35:27.603203    6048 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:35:27.603212    6048 cache.go:56] Caching tarball of preloaded images
	I0918 13:35:27.603279    6048 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:35:27.603284    6048 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:35:27.603348    6048 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/enable-default-cni-838000/config.json ...
	I0918 13:35:27.603359    6048 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/enable-default-cni-838000/config.json: {Name:mk930f91c9131ff85821d0c67780f51eb3ba72c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:35:27.603571    6048 start.go:360] acquireMachinesLock for enable-default-cni-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:27.603605    6048 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "enable-default-cni-838000"
	I0918 13:35:27.603616    6048 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:27.603644    6048 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:27.612210    6048 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:27.630252    6048 start.go:159] libmachine.API.Create for "enable-default-cni-838000" (driver="qemu2")
	I0918 13:35:27.630290    6048 client.go:168] LocalClient.Create starting
	I0918 13:35:27.630351    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:27.630383    6048 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:27.630393    6048 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:27.630428    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:27.630456    6048 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:27.630463    6048 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:27.630838    6048 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:27.795856    6048 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:27.869713    6048 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:27.869719    6048 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:27.869889    6048 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0918 13:35:27.879126    6048 main.go:141] libmachine: STDOUT: 
	I0918 13:35:27.879141    6048 main.go:141] libmachine: STDERR: 
	I0918 13:35:27.879206    6048 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2 +20000M
	I0918 13:35:27.887098    6048 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:27.887110    6048 main.go:141] libmachine: STDERR: 
	I0918 13:35:27.887125    6048 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0918 13:35:27.887136    6048 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:27.887150    6048 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:27.887173    6048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:f7:01:ec:90:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0918 13:35:27.888743    6048 main.go:141] libmachine: STDOUT: 
	I0918 13:35:27.888753    6048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:27.888773    6048 client.go:171] duration metric: took 258.484417ms to LocalClient.Create
	I0918 13:35:29.890947    6048 start.go:128] duration metric: took 2.287334583s to createHost
	I0918 13:35:29.891027    6048 start.go:83] releasing machines lock for "enable-default-cni-838000", held for 2.287457625s
	W0918 13:35:29.891125    6048 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:29.909640    6048 out.go:177] * Deleting "enable-default-cni-838000" in qemu2 ...
	W0918 13:35:29.944471    6048 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:29.944490    6048 start.go:729] Will try again in 5 seconds ...
	I0918 13:35:35.000683    6048 start.go:360] acquireMachinesLock for enable-default-cni-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:35.007125    6048 start.go:364] duration metric: took 6.376834ms to acquireMachinesLock for "enable-default-cni-838000"
	I0918 13:35:35.007232    6048 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:35.007477    6048 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:35.022922    6048 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:35.075314    6048 start.go:159] libmachine.API.Create for "enable-default-cni-838000" (driver="qemu2")
	I0918 13:35:35.075384    6048 client.go:168] LocalClient.Create starting
	I0918 13:35:35.075544    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:35.075610    6048 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:35.075629    6048 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:35.075688    6048 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:35.075735    6048 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:35.075747    6048 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:35.076259    6048 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:35.392760    6048 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:35.560793    6048 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:35.560800    6048 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:35.561005    6048 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0918 13:35:35.570834    6048 main.go:141] libmachine: STDOUT: 
	I0918 13:35:35.570854    6048 main.go:141] libmachine: STDERR: 
	I0918 13:35:35.570918    6048 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2 +20000M
	I0918 13:35:35.578962    6048 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:35.578977    6048 main.go:141] libmachine: STDERR: 
	I0918 13:35:35.578989    6048 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0918 13:35:35.578998    6048 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:35.579008    6048 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:35.579047    6048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:db:7a:24:38:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0918 13:35:35.580750    6048 main.go:141] libmachine: STDOUT: 
	I0918 13:35:35.580764    6048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:35.580783    6048 client.go:171] duration metric: took 505.397ms to LocalClient.Create
	I0918 13:35:37.582924    6048 start.go:128] duration metric: took 2.575447542s to createHost
	I0918 13:35:37.582969    6048 start.go:83] releasing machines lock for "enable-default-cni-838000", held for 2.575833125s
	W0918 13:35:37.583291    6048 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:37.597803    6048 out.go:201] 
	W0918 13:35:37.600807    6048 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:35:37.600882    6048 out.go:270] * 
	* 
	W0918 13:35:37.603693    6048 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:35:37.609626    6048 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.08s)
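
Note: both create attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed a network fd and createHost gives up. A quick way to check the daemon from the Jenkins host is a standalone unix-socket probe. The sketch below is illustrative only (the file name, messages, and 2s timeout are assumptions, and it is not part of the minikube test suite); the one thing it takes from the log is the default socket path in the failing command line:

	// probe_socket_vmnet.go - hypothetical diagnostic, not minikube code.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing qemu invocation above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the log's error: the socket
			// file exists but no socket_vmnet daemon is accepting on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}

On a unix socket, "connection refused" means the socket file exists but nothing is accepting on it (the socket_vmnet daemon is stopped or has crashed), while "no such file or directory" would mean the daemon was never started with that path.
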
TestNetworkPlugins/group/bridge/Start (12.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (12.153462792s)
-- stdout --
	* [bridge-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-838000" primary control-plane node in "bridge-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0918 13:35:35.244559    6076 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:35:35.244703    6076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:35.244707    6076 out.go:358] Setting ErrFile to fd 2...
	I0918 13:35:35.244709    6076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:35.244857    6076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:35:35.246169    6076 out.go:352] Setting JSON to false
	I0918 13:35:35.265400    6076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3894,"bootTime":1726687841,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:35:35.265481    6076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:35:35.276086    6076 out.go:177] * [bridge-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:35:35.283519    6076 notify.go:220] Checking for updates...
	I0918 13:35:35.290057    6076 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:35:35.296859    6076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:35:35.306994    6076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:35:35.314896    6076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:35:35.322940    6076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:35:35.328939    6076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:35:35.333376    6076 config.go:182] Loaded profile config "enable-default-cni-838000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:35.333454    6076 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:35.333510    6076 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:35:35.338985    6076 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:35:35.347946    6076 start.go:297] selected driver: qemu2
	I0918 13:35:35.347961    6076 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:35:35.347974    6076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:35:35.350796    6076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:35:35.355873    6076 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:35:35.361068    6076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:35:35.361101    6076 cni.go:84] Creating CNI manager for "bridge"
	I0918 13:35:35.361107    6076 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 13:35:35.361145    6076 start.go:340] cluster config:
	{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:35:35.366026    6076 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:35:35.377081    6076 out.go:177] * Starting "bridge-838000" primary control-plane node in "bridge-838000" cluster
	I0918 13:35:35.385926    6076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:35:35.385950    6076 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:35:35.385963    6076 cache.go:56] Caching tarball of preloaded images
	I0918 13:35:35.386075    6076 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:35:35.386082    6076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:35:35.386169    6076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/bridge-838000/config.json ...
	I0918 13:35:35.386183    6076 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/bridge-838000/config.json: {Name:mk686627b840cb911e6a89c44ac01e651903be34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:35:35.386661    6076 start.go:360] acquireMachinesLock for bridge-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:37.583161    6076 start.go:364] duration metric: took 2.196481042s to acquireMachinesLock for "bridge-838000"
	I0918 13:35:37.583334    6076 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:37.583615    6076 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:37.594748    6076 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:37.646897    6076 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I0918 13:35:37.646965    6076 client.go:168] LocalClient.Create starting
	I0918 13:35:37.647110    6076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:37.647179    6076 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:37.647198    6076 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:37.647278    6076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:37.647323    6076 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:37.647351    6076 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:37.648032    6076 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:37.818918    6076 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:37.918299    6076 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:37.918310    6076 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:37.918535    6076 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2
	I0918 13:35:37.928479    6076 main.go:141] libmachine: STDOUT: 
	I0918 13:35:37.928501    6076 main.go:141] libmachine: STDERR: 
	I0918 13:35:37.928574    6076 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I0918 13:35:37.937586    6076 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:37.937612    6076 main.go:141] libmachine: STDERR: 
	I0918 13:35:37.937638    6076 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2
	I0918 13:35:37.937643    6076 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:37.937656    6076 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:37.937693    6076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dc:de:3b:5a:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2
	I0918 13:35:37.939446    6076 main.go:141] libmachine: STDOUT: 
	I0918 13:35:37.939461    6076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:37.939483    6076 client.go:171] duration metric: took 292.51275ms to LocalClient.Create
	I0918 13:35:39.941575    6076 start.go:128] duration metric: took 2.357972791s to createHost
	I0918 13:35:39.941592    6076 start.go:83] releasing machines lock for "bridge-838000", held for 2.35841825s
	W0918 13:35:39.941604    6076 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:39.958479    6076 out.go:177] * Deleting "bridge-838000" in qemu2 ...
	W0918 13:35:39.970859    6076 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:39.970875    6076 start.go:729] Will try again in 5 seconds ...
	I0918 13:35:44.973101    6076 start.go:360] acquireMachinesLock for bridge-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:44.973650    6076 start.go:364] duration metric: took 442.25µs to acquireMachinesLock for "bridge-838000"
	I0918 13:35:44.973817    6076 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:44.974173    6076 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:44.983544    6076 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:45.035275    6076 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I0918 13:35:45.035325    6076 client.go:168] LocalClient.Create starting
	I0918 13:35:45.035432    6076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:45.035494    6076 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:45.035517    6076 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:45.035596    6076 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:45.035641    6076 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:45.035653    6076 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:45.036188    6076 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:45.207340    6076 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:45.307219    6076 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:45.307224    6076 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:45.307400    6076 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2
	I0918 13:35:45.316807    6076 main.go:141] libmachine: STDOUT: 
	I0918 13:35:45.316832    6076 main.go:141] libmachine: STDERR: 
	I0918 13:35:45.316903    6076 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I0918 13:35:45.324858    6076 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:45.324875    6076 main.go:141] libmachine: STDERR: 
	I0918 13:35:45.324890    6076 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2
	I0918 13:35:45.324897    6076 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:45.324907    6076 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:45.324933    6076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:ac:1d:e8:eb:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/bridge-838000/disk.qcow2
	I0918 13:35:45.326545    6076 main.go:141] libmachine: STDOUT: 
	I0918 13:35:45.326560    6076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:45.326573    6076 client.go:171] duration metric: took 291.245875ms to LocalClient.Create
	I0918 13:35:47.328711    6076 start.go:128] duration metric: took 2.35453225s to createHost
	I0918 13:35:47.328811    6076 start.go:83] releasing machines lock for "bridge-838000", held for 2.355160333s
	W0918 13:35:47.329133    6076 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:47.345726    6076 out.go:201] 
	W0918 13:35:47.350878    6076 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:35:47.350925    6076 out.go:270] * 
	* 
	W0918 13:35:47.352976    6076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:35:47.359096    6076 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (12.15s)
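
Note: every start in this group follows the same recovery path: the first createHost fails, the half-created profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), and a single retry fails identically before the test exits with GUEST_PROVISION. A condensed sketch of that control flow, with illustrative names only (startHost, deleteHost, and startWithRetry are stand-ins, not minikube's actual API):

	// Sketch of the start/delete/wait/retry flow seen in the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the real host creation; with no daemon on
	// /var/run/socket_vmnet it fails the same way every time.
	func startHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost(profile string) {
		// cleanup of the half-created VM, as in: * Deleting "bridge-838000" in qemu2 ...
	}

	func startWithRetry(profile string) error {
		err := startHost(profile)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost(profile)
		time.Sleep(5 * time.Second)
		return startHost(profile) // a second failure surfaces as GUEST_PROVISION
	}

	func main() {
		if err := startWithRetry("bridge-838000"); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

This also accounts for the durations: two ~2.3s createHost attempts plus the fixed 5s backoff come to roughly 10s (10.08s above, and 10.05s for the kubenet run that follows); the bridge run's 12.15s includes a further ~2.2s spent waiting for the machines lock held by the previous profile.
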
TestNetworkPlugins/group/kubenet/Start (10.05s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.047390083s)
-- stdout --
	* [kubenet-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-838000" primary control-plane node in "kubenet-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0918 13:35:39.813939    6184 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:35:39.814068    6184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:39.814072    6184 out.go:358] Setting ErrFile to fd 2...
	I0918 13:35:39.814074    6184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:35:39.814232    6184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:35:39.815307    6184 out.go:352] Setting JSON to false
	I0918 13:35:39.831232    6184 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3898,"bootTime":1726687841,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:35:39.831330    6184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:35:39.838587    6184 out.go:177] * [kubenet-838000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:35:39.846673    6184 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:35:39.846723    6184 notify.go:220] Checking for updates...
	I0918 13:35:39.851537    6184 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:35:39.854575    6184 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:35:39.857606    6184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:35:39.860561    6184 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:35:39.863521    6184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:35:39.866921    6184 config.go:182] Loaded profile config "bridge-838000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:39.867000    6184 config.go:182] Loaded profile config "multinode-400000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:35:39.867052    6184 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:35:39.871489    6184 out.go:177] * Using the qemu2 driver based on user configuration
	I0918 13:35:39.878525    6184 start.go:297] selected driver: qemu2
	I0918 13:35:39.878531    6184 start.go:901] validating driver "qemu2" against <nil>
	I0918 13:35:39.878538    6184 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:35:39.880757    6184 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 13:35:39.883525    6184 out.go:177] * Automatically selected the socket_vmnet network
	I0918 13:35:39.886639    6184 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 13:35:39.886665    6184 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0918 13:35:39.886693    6184 start.go:340] cluster config:
	{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:35:39.890507    6184 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 13:35:39.896486    6184 out.go:177] * Starting "kubenet-838000" primary control-plane node in "kubenet-838000" cluster
	I0918 13:35:39.900553    6184 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 13:35:39.900568    6184 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 13:35:39.900575    6184 cache.go:56] Caching tarball of preloaded images
	I0918 13:35:39.900640    6184 preload.go:172] Found /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 13:35:39.900646    6184 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 13:35:39.900710    6184 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/kubenet-838000/config.json ...
	I0918 13:35:39.900722    6184 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/kubenet-838000/config.json: {Name:mke046b690f38f440a4fc6b6ded7e25c55fa7acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 13:35:39.900959    6184 start.go:360] acquireMachinesLock for kubenet-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:39.941639    6184 start.go:364] duration metric: took 40.65625ms to acquireMachinesLock for "kubenet-838000"
	I0918 13:35:39.941666    6184 start.go:93] Provisioning new machine with config: &{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:39.941737    6184 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:39.949580    6184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:39.972896    6184 start.go:159] libmachine.API.Create for "kubenet-838000" (driver="qemu2")
	I0918 13:35:39.972936    6184 client.go:168] LocalClient.Create starting
	I0918 13:35:39.973006    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:39.973040    6184 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:39.973054    6184 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:39.973097    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:39.973123    6184 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:39.973132    6184 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:39.973497    6184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:40.138891    6184 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:40.222912    6184 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:40.222917    6184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:40.223088    6184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2
	I0918 13:35:40.232628    6184 main.go:141] libmachine: STDOUT: 
	I0918 13:35:40.232643    6184 main.go:141] libmachine: STDERR: 
	I0918 13:35:40.232709    6184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2 +20000M
	I0918 13:35:40.240567    6184 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:40.240586    6184 main.go:141] libmachine: STDERR: 
	I0918 13:35:40.240606    6184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2
	I0918 13:35:40.240617    6184 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:40.240626    6184 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:40.240653    6184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:d5:30:99:f0:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2
	I0918 13:35:40.242342    6184 main.go:141] libmachine: STDOUT: 
	I0918 13:35:40.242354    6184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:40.242374    6184 client.go:171] duration metric: took 269.4335ms to LocalClient.Create
	I0918 13:35:42.244583    6184 start.go:128] duration metric: took 2.302852334s to createHost
	I0918 13:35:42.244627    6184 start.go:83] releasing machines lock for "kubenet-838000", held for 2.302997041s
	W0918 13:35:42.244666    6184 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:42.264955    6184 out.go:177] * Deleting "kubenet-838000" in qemu2 ...
	W0918 13:35:42.302262    6184 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:42.302283    6184 start.go:729] Will try again in 5 seconds ...
	I0918 13:35:47.304404    6184 start.go:360] acquireMachinesLock for kubenet-838000: {Name:mke6cb59fa7afdc4e90d5eea38d2b839bb894704 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0918 13:35:47.328952    6184 start.go:364] duration metric: took 24.3755ms to acquireMachinesLock for "kubenet-838000"
	I0918 13:35:47.329103    6184 start.go:93] Provisioning new machine with config: &{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 13:35:47.329369    6184 start.go:125] createHost starting for "" (driver="qemu2")
	I0918 13:35:47.337817    6184 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0918 13:35:47.390091    6184 start.go:159] libmachine.API.Create for "kubenet-838000" (driver="qemu2")
	I0918 13:35:47.390145    6184 client.go:168] LocalClient.Create starting
	I0918 13:35:47.390272    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/ca.pem
	I0918 13:35:47.390326    6184 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:47.390342    6184 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:47.390406    6184 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19667-1040/.minikube/certs/cert.pem
	I0918 13:35:47.390436    6184 main.go:141] libmachine: Decoding PEM data...
	I0918 13:35:47.390455    6184 main.go:141] libmachine: Parsing certificate...
	I0918 13:35:47.390941    6184 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0918 13:35:47.572919    6184 main.go:141] libmachine: Creating SSH key...
	I0918 13:35:47.769065    6184 main.go:141] libmachine: Creating Disk image...
	I0918 13:35:47.769076    6184 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0918 13:35:47.769269    6184 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2
	I0918 13:35:47.779425    6184 main.go:141] libmachine: STDOUT: 
	I0918 13:35:47.779449    6184 main.go:141] libmachine: STDERR: 
	I0918 13:35:47.779508    6184 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2 +20000M
	I0918 13:35:47.789021    6184 main.go:141] libmachine: STDOUT: Image resized.
	
	I0918 13:35:47.789042    6184 main.go:141] libmachine: STDERR: 
	I0918 13:35:47.789057    6184 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2
	I0918 13:35:47.789062    6184 main.go:141] libmachine: Starting QEMU VM...
	I0918 13:35:47.789072    6184 qemu.go:418] Using hvf for hardware acceleration
	I0918 13:35:47.789107    6184 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a7:2a:c0:0e:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19667-1040/.minikube/machines/kubenet-838000/disk.qcow2
	I0918 13:35:47.790905    6184 main.go:141] libmachine: STDOUT: 
	I0918 13:35:47.790921    6184 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0918 13:35:47.790933    6184 client.go:171] duration metric: took 400.786917ms to LocalClient.Create
	I0918 13:35:49.793119    6184 start.go:128] duration metric: took 2.463722834s to createHost
	I0918 13:35:49.793317    6184 start.go:83] releasing machines lock for "kubenet-838000", held for 2.464242583s
	W0918 13:35:49.793615    6184 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0918 13:35:49.804052    6184 out.go:201] 
	W0918 13:35:49.807294    6184 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0918 13:35:49.807319    6184 out.go:270] * 
	* 
	W0918 13:35:49.809907    6184 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 13:35:49.820213    6184 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.05s)
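Every qemu2 start in the run above fails for the same reason: /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with GUEST_PROVISION. A minimal Go sketch of a preflight probe for this condition (a hypothetical helper, not part of minikube):

	// socketprobe.go — check that the socket_vmnet daemon is accepting
	// connections before attempting a qemu2 VM start.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// This is the state the log shows: connect(2) fails with ECONNREFUSED.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a host in this state, restarting the daemon (for Homebrew installs, typically `sudo brew services start socket_vmnet`) should clear this whole family of qemu2 failures.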
Test pass (155/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 7.62
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 205.22
29 TestAddons/serial/Volcano 37.51
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 17.72
35 TestAddons/parallel/InspektorGadget 10.31
36 TestAddons/parallel/MetricsServer 5.3
39 TestAddons/parallel/CSI 31.7
40 TestAddons/parallel/Headlamp 16.68
41 TestAddons/parallel/CloudSpanner 5.21
42 TestAddons/parallel/LocalPath 40.9
43 TestAddons/parallel/NvidiaDevicePlugin 6.19
44 TestAddons/parallel/Yakd 10.3
45 TestAddons/StoppedEnableDisable 9.4
53 TestHyperKitDriverInstallOrUpdate 10.89
56 TestErrorSpam/setup 34.57
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.71
60 TestErrorSpam/unpause 0.64
61 TestErrorSpam/stop 64.31
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 43.37
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.04
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.83
73 TestFunctional/serial/CacheCmd/cache/add_local 1.13
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.62
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.8
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 278.15
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.56
84 TestFunctional/serial/LogsFileCmd 0.57
85 TestFunctional/serial/InvalidService 3.8
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 13.62
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.23
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 24.75
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.39
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.37
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
111 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.18
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.06
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.79
119 TestFunctional/parallel/ImageCommands/Setup 1.8
120 TestFunctional/parallel/DockerEnv/bash 0.26
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
124 TestFunctional/parallel/ServiceCmd/DeployApp 10.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.52
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.27
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
137 TestFunctional/parallel/ServiceCmd/List 0.11
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
140 TestFunctional/parallel/ServiceCmd/Format 0.09
141 TestFunctional/parallel/ServiceCmd/URL 0.09
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
149 TestFunctional/parallel/ProfileCmd/profile_list 0.13
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 4.99
152 TestFunctional/parallel/MountCmd/specific-port 0.97
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 178.8
161 TestMultiControlPlane/serial/DeployApp 4.41
162 TestMultiControlPlane/serial/PingHostFromPods 0.76
163 TestMultiControlPlane/serial/AddWorkerNode 55.63
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.29
166 TestMultiControlPlane/serial/CopyFile 4.21
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.04
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 2.09
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
247 TestNoKubernetes/serial/StartNoK8sWithVersion 0.15
251 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
252 TestNoKubernetes/serial/ProfileList 0.09
253 TestNoKubernetes/serial/Stop 3.61
255 TestStoppedBinaryUpgrade/Setup 1.03
257 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
258 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
285 TestStartStop/group/old-k8s-version/serial/Stop 2.96
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
296 TestStartStop/group/no-preload/serial/Stop 3.21
297 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
307 TestStartStop/group/embed-certs/serial/Stop 1.75
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.91
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
329 TestStartStop/group/newest-cni/serial/Stop 3.61
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0918 12:37:22.283448    1516 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0918 12:37:22.283885    1516 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
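This check is a plain filesystem lookup: preload.go only verifies that the tarball cached by the earlier json-events download is present. A sketch of the equivalent check, assuming the cache layout shown in the log (the helper name is hypothetical):

	// preloadcheck.go — report whether a preload tarball is already cached.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists mirrors the lookup logged by preload.go:146.
	func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.20.0", "docker"))
	}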

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-576000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-576000: exit status 85 (95.449875ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-576000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |          |
	|         | -p download-only-576000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 12:37:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 12:37:09.072665    1517 out.go:345] Setting OutFile to fd 1 ...
	I0918 12:37:09.073065    1517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:09.073069    1517 out.go:358] Setting ErrFile to fd 2...
	I0918 12:37:09.073072    1517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:09.073271    1517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	W0918 12:37:09.073377    1517 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19667-1040/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19667-1040/.minikube/config/config.json: no such file or directory
	I0918 12:37:09.074900    1517 out.go:352] Setting JSON to true
	I0918 12:37:09.093374    1517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":388,"bootTime":1726687841,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 12:37:09.093501    1517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 12:37:09.099601    1517 out.go:97] [download-only-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 12:37:09.099766    1517 notify.go:220] Checking for updates...
	W0918 12:37:09.099820    1517 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 12:37:09.101441    1517 out.go:169] MINIKUBE_LOCATION=19667
	I0918 12:37:09.104603    1517 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 12:37:09.108752    1517 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:37:09.111588    1517 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:37:09.114545    1517 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	W0918 12:37:09.120533    1517 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 12:37:09.120737    1517 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 12:37:09.124530    1517 out.go:97] Using the qemu2 driver based on user configuration
	I0918 12:37:09.124546    1517 start.go:297] selected driver: qemu2
	I0918 12:37:09.124559    1517 start.go:901] validating driver "qemu2" against <nil>
	I0918 12:37:09.124629    1517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 12:37:09.127622    1517 out.go:169] Automatically selected the socket_vmnet network
	I0918 12:37:09.132261    1517 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0918 12:37:09.132368    1517 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 12:37:09.132393    1517 cni.go:84] Creating CNI manager for ""
	I0918 12:37:09.132427    1517 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 12:37:09.132492    1517 start.go:340] cluster config:
	{Name:download-only-576000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 12:37:09.138078    1517 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:09.142452    1517 out.go:97] Downloading VM boot image ...
	I0918 12:37:09.142467    1517 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0918 12:37:15.036513    1517 out.go:97] Starting "download-only-576000" primary control-plane node in "download-only-576000" cluster
	I0918 12:37:15.036532    1517 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 12:37:15.095946    1517 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 12:37:15.095954    1517 cache.go:56] Caching tarball of preloaded images
	I0918 12:37:15.096115    1517 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 12:37:15.101701    1517 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0918 12:37:15.101707    1517 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:37:15.198702    1517 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 12:37:20.968638    1517 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:37:20.968788    1517 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:37:21.664196    1517 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 12:37:21.664384    1517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/download-only-576000/config.json ...
	I0918 12:37:21.664401    1517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/download-only-576000/config.json: {Name:mk44c4b52f07432554c5b53c20e72a7d2815a96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 12:37:21.664637    1517 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 12:37:21.664849    1517 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0918 12:37:22.236091    1517 out.go:193] 
	W0918 12:37:22.241236    1517 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19667-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106759780 0x106759780 0x106759780 0x106759780 0x106759780 0x106759780 0x106759780] Decompressors:map[bz2:0x1400074f490 gz:0x1400074f498 tar:0x1400074f440 tar.bz2:0x1400074f450 tar.gz:0x1400074f460 tar.xz:0x1400074f470 tar.zst:0x1400074f480 tbz2:0x1400074f450 tgz:0x1400074f460 txz:0x1400074f470 tzst:0x1400074f480 xz:0x1400074f4a0 zip:0x1400074f4b0 zst:0x1400074f4a8] Getters:map[file:0x1400057c9f0 http:0x14000734410 https:0x14000734460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0918 12:37:22.241262    1517 out_reason.go:110] 
	W0918 12:37:22.250062    1517 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 12:37:22.254124    1517 out.go:193] 
	
	
	* The control-plane node download-only-576000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-576000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
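The PASS here only asserts that `minikube logs` returns promptly; exit status 85 is expected because the v1.20.0 start never created a host. The audit log shows why: the kubectl fetch appends go-getter's `?checksum=file:<url>.sha256` suffix, and that .sha256 request returns 404 (dl.k8s.io serves no darwin/arm64 kubectl for v1.20.0), which go-getter surfaces as "invalid checksum". A sketch of the verification step itself, assuming the expected digest was fetched separately (file and argument layout illustrative):

	// verifydownload.go — recompute and compare a download's SHA-256,
	// the check that go-getter's checksum suffix drives.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func fileSHA256(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		if len(os.Args) != 3 {
			fmt.Fprintln(os.Stderr, "usage: verifydownload <file> <expected-sha256>")
			os.Exit(2)
		}
		got, err := fileSHA256(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if got != os.Args[2] {
			fmt.Fprintf(os.Stderr, "invalid checksum: got %s, want %s\n", got, os.Args[2])
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}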

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-576000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (7.62s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-832000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-832000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (7.623606292s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.62s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0918 12:37:30.253492    1516 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 12:37:30.253555    1516 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-832000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-832000: exit status 85 (77.700417ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-576000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |                     |
	|         | -p download-only-576000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| delete  | -p download-only-576000        | download-only-576000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT | 18 Sep 24 12:37 PDT |
	| start   | -o=json --download-only        | download-only-832000 | jenkins | v1.34.0 | 18 Sep 24 12:37 PDT |                     |
	|         | -p download-only-832000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 12:37:22
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 12:37:22.657554    1545 out.go:345] Setting OutFile to fd 1 ...
	I0918 12:37:22.657674    1545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:22.657677    1545 out.go:358] Setting ErrFile to fd 2...
	I0918 12:37:22.657680    1545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 12:37:22.657808    1545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 12:37:22.658854    1545 out.go:352] Setting JSON to true
	I0918 12:37:22.675640    1545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":401,"bootTime":1726687841,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 12:37:22.675702    1545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 12:37:22.680040    1545 out.go:97] [download-only-832000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 12:37:22.680142    1545 notify.go:220] Checking for updates...
	I0918 12:37:22.684067    1545 out.go:169] MINIKUBE_LOCATION=19667
	I0918 12:37:22.687080    1545 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 12:37:22.692051    1545 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 12:37:22.695030    1545 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 12:37:22.698039    1545 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	W0918 12:37:22.704055    1545 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 12:37:22.704206    1545 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 12:37:22.707025    1545 out.go:97] Using the qemu2 driver based on user configuration
	I0918 12:37:22.707033    1545 start.go:297] selected driver: qemu2
	I0918 12:37:22.707036    1545 start.go:901] validating driver "qemu2" against <nil>
	I0918 12:37:22.707072    1545 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 12:37:22.710060    1545 out.go:169] Automatically selected the socket_vmnet network
	I0918 12:37:22.715197    1545 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0918 12:37:22.715281    1545 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 12:37:22.715298    1545 cni.go:84] Creating CNI manager for ""
	I0918 12:37:22.715330    1545 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 12:37:22.715339    1545 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 12:37:22.715388    1545 start.go:340] cluster config:
	{Name:download-only-832000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 12:37:22.718686    1545 iso.go:125] acquiring lock: {Name:mk56dd873d2fcdcac38fd42ee550d8f8cd56c237 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 12:37:22.722068    1545 out.go:97] Starting "download-only-832000" primary control-plane node in "download-only-832000" cluster
	I0918 12:37:22.722074    1545 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 12:37:22.784817    1545 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 12:37:22.784833    1545 cache.go:56] Caching tarball of preloaded images
	I0918 12:37:22.784995    1545 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 12:37:22.789033    1545 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0918 12:37:22.789040    1545 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0918 12:37:22.873119    1545 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19667-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-832000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-832000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-832000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
I0918 12:37:30.736695    1516 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-256000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-256000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-256000
--- PASS: TestBinaryMirror (0.38s)
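This test points `--binary-mirror` at a local HTTP server on 127.0.0.1:49310, so kubectl is fetched from the mirror rather than from dl.k8s.io. A minimal stand-in for such a mirror (directory layout and port are illustrative):

	// mirror.go — serve a dl.k8s.io-style directory tree over local HTTP.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Expects files under ./mirror-root/release/<version>/bin/<os>/<arch>/...
		http.Handle("/", http.FileServer(http.Dir("./mirror-root")))
		log.Println("binary mirror listening on 127.0.0.1:49310")
		log.Fatal(http.ListenAndServe("127.0.0.1:49310", nil))
	}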

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-476000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-476000: exit status 85 (62.843167ms)
-- stdout --
	* Profile "addons-476000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-476000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-476000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-476000: exit status 85 (56.888459ms)
-- stdout --
	* Profile "addons-476000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-476000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (205.22s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-476000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-476000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m25.217027625s)
--- PASS: TestAddons/Setup (205.22s)

TestAddons/serial/Volcano (37.51s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.963125ms
addons_test.go:905: volcano-admission stabilized in 8.052833ms
addons_test.go:897: volcano-scheduler stabilized in 8.084708ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-w287n" [b0aa5e9d-8532-4329-bcb9-9f75c4fbd41c] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.007468292s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-nr294" [10879d80-b9d4-423c-a7f2-7a46dd366f4e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.01141s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-hxqcs" [a8560511-bb77-430b-a6cc-2f3427a4cdc2] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.008388583s
addons_test.go:932: (dbg) Run:  kubectl --context addons-476000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-476000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-476000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [73dd85f9-aca8-42c5-ab46-4e63f858093e] Pending
helpers_test.go:344: "test-job-nginx-0" [73dd85f9-aca8-42c5-ab46-4e63f858093e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [73dd85f9-aca8-42c5-ab46-4e63f858093e] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.010962833s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-476000 addons disable volcano --alsologtostderr -v=1: (10.240875208s)
--- PASS: TestAddons/serial/Volcano (37.51s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-476000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-476000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (17.72s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-476000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-476000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-476000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [11717648-510d-4c69-b8f1-41d720436dea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [11717648-510d-4c69-b8f1-41d720436dea] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.01229475s
I0918 12:50:31.743058    1516 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-476000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-476000 addons disable ingress --alsologtostderr -v=1: (7.372163167s)
--- PASS: TestAddons/parallel/Ingress (17.72s)
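The ingress-dns half of this test resolves hello-john.test directly against the node IP (the `nslookup hello-john.test 192.168.105.2` step above). The same query can be reproduced programmatically; a sketch using Go's resolver, with the node IP taken from this run:

	// ingressdnsquery.go — send a DNS query straight to the minikube node,
	// as the nslookup step in the test does.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "192.168.105.2:53")
			},
		}
		ips, err := r.LookupHost(context.Background(), "hello-john.test")
		fmt.Println(ips, err)
	}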

TestAddons/parallel/InspektorGadget (10.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nbmxg" [b7b82149-4782-4f0e-80f0-6802f659237a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007607125s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-476000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-476000: (5.300453958s)
--- PASS: TestAddons/parallel/InspektorGadget (10.31s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.470167ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4jn9k" [32f905d8-b16d-4b06-842f-d4fd0ea4a6d1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009493667s
addons_test.go:417: (dbg) Run:  kubectl --context addons-476000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.30s)

TestAddons/parallel/CSI (31.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0918 12:49:35.120331    1516 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.756666ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [13f46bd5-34cd-4b84-8bd1-4ab1171794f0] Pending
helpers_test.go:344: "task-pv-pod" [13f46bd5-34cd-4b84-8bd1-4ab1171794f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [13f46bd5-34cd-4b84-8bd1-4ab1171794f0] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.009378s
addons_test.go:590: (dbg) Run:  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-476000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-476000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-476000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-476000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5ffcfadc-12ef-4583-b2ee-6897c2be8bd6] Pending
helpers_test.go:344: "task-pv-pod-restore" [5ffcfadc-12ef-4583-b2ee-6897c2be8bd6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5ffcfadc-12ef-4583-b2ee-6897c2be8bd6] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.011311667s
addons_test.go:632: (dbg) Run:  kubectl --context addons-476000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-476000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-476000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-476000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.135384375s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (31.70s)
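
For reference, the snapshot/restore cycle driven above maps onto a handful of kubectl calls. A minimal replay, assuming the csi-hostpath-driver testdata manifests from the minikube source tree:

  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/snapshot.yaml
  # wait until the snapshot is usable before deleting the source volume
  until [ "$(kubectl --context addons-476000 get volumesnapshot new-snapshot-demo \
      -o jsonpath='{.status.readyToUse}')" = "true" ]; do sleep 2; done
  kubectl --context addons-476000 delete pod task-pv-pod
  kubectl --context addons-476000 delete pvc hpvc
  # restore the snapshot into a fresh PVC and mount it in a new pod
  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-476000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml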

TestAddons/parallel/Headlamp (16.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-476000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-xclsl" [6d5d31e2-e770-4ede-9244-9683567d74ae] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-xclsl" [6d5d31e2-e770-4ede-9244-9683567d74ae] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.008746542s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-476000 addons disable headlamp --alsologtostderr -v=1: (5.302089666s)
--- PASS: TestAddons/parallel/Headlamp (16.68s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-wlp4j" [e7cbb30a-37ab-4bdc-b613-30bd2ef1008c] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008840292s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-476000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (40.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-476000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-476000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476000 get pvc test-pvc -o jsonpath={.status.phase} -n default
2024/09/18 12:50:45 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7fdf125a-b9f7-4506-904b-345d59683639] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7fdf125a-b9f7-4506-904b-345d59683639] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7fdf125a-b9f7-4506-904b-345d59683639] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.0035755s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-476000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 ssh "cat /opt/local-path-provisioner/pvc-9102924f-1203-4dc1-93dd-9133f9ce5121_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-476000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-476000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-476000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.435508166s)
--- PASS: TestAddons/parallel/LocalPath (40.90s)
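
The run of helpers_test.go:394 lines above is a poll on the claim's phase. The same wait in plain shell, assuming the test-pvc claim from this run:

  # block until the local-path provisioner binds the claim
  until [ "$(kubectl --context addons-476000 get pvc test-pvc -n default \
      -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2
  done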

TestAddons/parallel/NvidiaDevicePlugin (6.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fmdxx" [d1efddf9-af8b-411f-b498-b4c94a38e667] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008506584s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-476000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

TestAddons/parallel/Yakd (10.3s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xj5dm" [a3000cbb-000a-4a7e-9f92-e64994a49a92] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009107333s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-476000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-476000 addons disable yakd --alsologtostderr -v=1: (5.2894285s)
--- PASS: TestAddons/parallel/Yakd (10.30s)

TestAddons/StoppedEnableDisable (9.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-476000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-476000: (9.212229875s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-476000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-476000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-476000
--- PASS: TestAddons/StoppedEnableDisable (9.40s)
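
The point of this test is that addon toggles still succeed against a stopped profile, since they appear to only rewrite the profile's stored config. Replayed by hand with the commands from the log:

  out/minikube-darwin-arm64 stop -p addons-476000
  # the VM is down, but enable/disable still work
  out/minikube-darwin-arm64 addons enable dashboard -p addons-476000
  out/minikube-darwin-arm64 addons disable dashboard -p addons-476000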

TestHyperKitDriverInstallOrUpdate (10.89s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0918 13:32:05.016893    1516 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0918 13:32:05.017178    1516 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19667
- KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3883768286/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

--- PASS: TestHyperKitDriverInstallOrUpdate (10.89s)
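
The DRV_UNSUPPORTED_OS exit is the expected outcome here: hyperkit is an Intel-only hypervisor, which is why this arm64 job otherwise runs the qemu2 driver. A sketch of the kind of arch guard a caller might script around this (the demo profile name is hypothetical):

  # hyperkit only exists for x86_64 Macs; fall back to qemu2 on Apple Silicon
  if [ "$(uname -m)" = "arm64" ]; then driver=qemu2; else driver=hyperkit; fi
  out/minikube-darwin-arm64 start -p demo --driver="$driver"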

TestErrorSpam/setup (34.57s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-983000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-983000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 --driver=qemu2 : (34.57433875s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (34.57s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 pause
--- PASS: TestErrorSpam/pause (0.71s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (64.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 stop: (12.206888583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 stop: (26.062829541s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-983000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-983000 stop: (26.035605166s)
--- PASS: TestErrorSpam/stop (64.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19667-1040/.minikube/files/etc/test/nested/copy/1516/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-815000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-815000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (43.366977958s)
--- PASS: TestFunctional/serial/StartWithProxy (43.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.04s)

=== RUN   TestFunctional/serial/SoftStart
I0918 12:54:00.141743    1516 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-815000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-815000 --alsologtostderr -v=8: (36.03761775s)
functional_test.go:663: soft start took 36.038086958s for "functional-815000" cluster.
I0918 12:54:36.178117    1516 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.04s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-815000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-815000 cache add registry.k8s.io/pause:3.1: (1.268786958s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3399866042/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cache add minikube-local-cache-test:functional-815000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cache delete minikube-local-cache-test:functional-815000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-815000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.80875ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)
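
The cache_reload sequence amounts to: remove the image inside the node, confirm crictl no longer sees it, then have minikube re-push everything held in its host-side cache. Replayed with the exact commands from the log:

  out/minikube-darwin-arm64 -p functional-815000 ssh sudo docker rmi registry.k8s.io/pause:latest
  # now fails with 'no such image ... present', as captured above
  out/minikube-darwin-arm64 -p functional-815000 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
  out/minikube-darwin-arm64 -p functional-815000 cache reload
  out/minikube-darwin-arm64 -p functional-815000 ssh sudo crictl inspecti registry.k8s.io/pause:latest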

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.8s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 kubectl -- --context functional-815000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.80s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-815000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-815000 get pods: (1.01756075s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (278.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-815000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0918 12:55:56.306392    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:56.314161    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:56.327567    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:56.350968    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:56.394427    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:56.477866    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:56.641427    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:56.965030    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:57.608829    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:55:58.892637    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:56:01.456227    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:56:06.577891    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:56:16.821357    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:56:37.304357    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:57:18.266488    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 12:58:40.185003    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-815000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m38.145373417s)
functional_test.go:761: restart took 4m38.145458375s for "functional-815000" cluster.
I0918 12:59:21.000776    1516 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (278.15s)
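
The restart exercises --extra-config, which forwards a component flag through to the deployed apiserver; the interleaved cert_rotation errors appear to be background noise referencing the long-deleted addons-476000 profile rather than this cluster. The invocation, as run above:

  # restart the cluster with an apiserver admission-plugin override
  out/minikube-darwin-arm64 start -p functional-815000 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all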

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-815000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.56s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.56s)

TestFunctional/serial/LogsFileCmd (0.57s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1336350159/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.57s)

TestFunctional/serial/InvalidService (3.8s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-815000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-815000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-815000: exit status 115 (145.65475ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31807 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-815000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.80s)
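
The exit status 115 is the expected result of asking for a service URL when no running pod backs the service. A self-checking replay, inverting the failure the way a negative test would:

  kubectl --context functional-815000 apply -f testdata/invalidsvc.yaml
  # expected to fail with SVC_UNREACHABLE: no running pod behind invalid-svc
  if out/minikube-darwin-arm64 service invalid-svc -p functional-815000; then
    echo "unexpected success" >&2
    exit 1
  fi
  kubectl --context functional-815000 delete -f testdata/invalidsvc.yaml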

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 config get cpus: exit status 14 (31.565042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 config get cpus: exit status 14 (31.481667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
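
Both non-zero exits above are the same case: config get on a key that is currently unset returns status 14 in this build. The round trip, condensed:

  out/minikube-darwin-arm64 -p functional-815000 config set cpus 2
  out/minikube-darwin-arm64 -p functional-815000 config get cpus    # prints 2
  out/minikube-darwin-arm64 -p functional-815000 config unset cpus
  out/minikube-darwin-arm64 -p functional-815000 config get cpus    # exits 14: key not found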

TestFunctional/parallel/DashboardCmd (13.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-815000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-815000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2766: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.62s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-815000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-815000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.91425ms)

-- stdout --
	* [functional-815000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0918 13:00:10.868128    2749 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:00:10.868279    2749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:00:10.868282    2749 out.go:358] Setting ErrFile to fd 2...
	I0918 13:00:10.868284    2749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:00:10.868427    2749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:00:10.869540    2749 out.go:352] Setting JSON to false
	I0918 13:00:10.886681    2749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1770,"bootTime":1726687840,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:00:10.886795    2749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:00:10.892078    2749 out.go:177] * [functional-815000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0918 13:00:10.899016    2749 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:00:10.899046    2749 notify.go:220] Checking for updates...
	I0918 13:00:10.906980    2749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:00:10.910985    2749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:00:10.913986    2749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:00:10.917022    2749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:00:10.920024    2749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:00:10.921774    2749 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:00:10.922020    2749 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:00:10.925940    2749 out.go:177] * Using the qemu2 driver based on existing profile
	I0918 13:00:10.932835    2749 start.go:297] selected driver: qemu2
	I0918 13:00:10.932846    2749 start.go:901] validating driver "qemu2" against &{Name:functional-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:00:10.932934    2749 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:00:10.939915    2749 out.go:201] 
	W0918 13:00:10.943983    2749 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 13:00:10.950921    2749 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-815000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
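
--dry-run validates flags against the existing profile without starting anything, so the undersized memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while the second, flag-free invocation validates cleanly:

  # below minikube's 1800MB floor: exits 23 without touching the VM
  out/minikube-darwin-arm64 start -p functional-815000 --dry-run --memory 250MB --driver=qemu2
  echo "exit: $?"
  # same profile with no memory override: validation passes
  out/minikube-darwin-arm64 start -p functional-815000 --dry-run --alsologtostderr -v=1 --driver=qemu2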

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-815000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-815000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.806959ms)

-- stdout --
	* [functional-815000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0918 13:00:11.089604    2760 out.go:345] Setting OutFile to fd 1 ...
	I0918 13:00:11.089701    2760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:00:11.089704    2760 out.go:358] Setting ErrFile to fd 2...
	I0918 13:00:11.089706    2760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 13:00:11.089826    2760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
	I0918 13:00:11.091289    2760 out.go:352] Setting JSON to false
	I0918 13:00:11.108530    2760 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1771,"bootTime":1726687840,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0918 13:00:11.108634    2760 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0918 13:00:11.113000    2760 out.go:177] * [functional-815000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0918 13:00:11.119850    2760 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 13:00:11.119912    2760 notify.go:220] Checking for updates...
	I0918 13:00:11.126999    2760 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	I0918 13:00:11.128460    2760 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0918 13:00:11.131985    2760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 13:00:11.135017    2760 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	I0918 13:00:11.138027    2760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 13:00:11.141253    2760 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 13:00:11.141488    2760 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 13:00:11.145980    2760 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0918 13:00:11.152973    2760 start.go:297] selected driver: qemu2
	I0918 13:00:11.152985    2760 start.go:901] validating driver "qemu2" against &{Name:functional-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 13:00:11.153087    2760 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 13:00:11.159058    2760 out.go:201] 
	W0918 13:00:11.162867    2760 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 13:00:11.166928    2760 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
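Note: this test drives `minikube start` under a non-English locale (French in this run) with a deliberately undersized memory request, and asserts that the resulting RSRC_INSUFFICIENT_REQ_MEMORY failure comes back localized. A minimal sketch of the idea, assuming the standard LC_ALL mechanism (flags illustrative, not the test's exact invocation):

	# Hypothetical reproduction: a French locale plus an undersized --memory request
	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-815000 --memory=250MB --alsologtostderr
	# expected: a non-zero exit with the localized RSRC_INSUFFICIENT_REQ_MEMORY message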

TestFunctional/parallel/StatusCmd (0.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.23s)
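Note: `status -f` takes a Go template over minikube's status struct; the fields referenced above are .Host, .Kubelet, .APIServer and .Kubeconfig (the literal "kublet" in the template is just label text in the test, not a field name). A sketch of typical usage, with illustrative output for a healthy cluster:

	out/minikube-darwin-arm64 -p functional-815000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	# illustrative output on a healthy cluster: host:Running,kubelet:Running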

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (24.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [49219f8e-cc39-4461-8509-e0e48dc19d5c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009319459s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-815000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-815000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-815000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-815000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [62ccd31e-8d1e-429d-b54c-8cb235b989d3] Pending
helpers_test.go:344: "sp-pod" [62ccd31e-8d1e-429d-b54c-8cb235b989d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [62ccd31e-8d1e-429d-b54c-8cb235b989d3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.001974792s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-815000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-815000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-815000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [124ec4ae-42b0-4a03-b3a5-e859c6d2242d] Pending
helpers_test.go:344: "sp-pod" [124ec4ae-42b0-4a03-b3a5-e859c6d2242d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [124ec4ae-42b0-4a03-b3a5-e859c6d2242d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00570725s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-815000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.75s)
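Note: the sequence above is a persistence check: create a PVC, mount it in a pod, write /tmp/mount/foo, delete and recreate the pod, then verify the file survived the pod's removal. A rough equivalent of the claim being applied, assumed rather than copied from testdata/storage-provisioner/pvc.yaml:

	# Sketch of a claim like the one the test applies (sizes/names assumed)
	kubectl --context functional-815000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF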

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh -n functional-815000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cp functional-815000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1180477448/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh -n functional-815000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh -n functional-815000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.39s)
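Note: `minikube cp` is directional by argument form, as the three calls above show: a bare path is on the host, while <node>:<path> (or a target path alone, as in the first call) addresses the guest. A sketch (paths illustrative):

	# host -> node (default node), then read it back out
	out/minikube-darwin-arm64 -p functional-815000 cp ./cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-arm64 -p functional-815000 cp functional-815000:/home/docker/cp-test.txt ./cp-test-copy.txt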

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1516/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo cat /etc/test/nested/copy/1516/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1516.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo cat /etc/ssl/certs/1516.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1516.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo cat /usr/share/ca-certificates/1516.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo cat /etc/ssl/certs/15162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo cat /usr/share/ca-certificates/15162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.37s)
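Note: the numeric filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links: each synced cert must also be reachable under <hash>.0 in /etc/ssl/certs. The mapping can be verified by hand (a sketch; run inside the VM, cert path taken from the test):

	openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/1516.pem
	# should print 51391683, matching the /etc/ssl/certs/51391683.0 link above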

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-815000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh "sudo systemctl is-active crio": exit status 1 (76.592459ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)
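Note: the non-zero exit is the expected outcome here. `systemctl is-active` prints the unit state and exits 3 for an inactive unit, which is what the "ssh: Process exited with status 3" line reflects (minikube ssh itself then reports exit status 1); the test passes because cri-o is not the active runtime on this docker-runtime cluster. Checked by hand (a sketch):

	out/minikube-darwin-arm64 -p functional-815000 ssh 'sudo systemctl is-active crio; echo exit=$?'
	# prints "inactive" followed by exit=3 when cri-o is disabled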

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-815000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-815000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-815000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-815000 image ls --format short --alsologtostderr:
I0918 13:00:18.421497    2788 out.go:345] Setting OutFile to fd 1 ...
I0918 13:00:18.421884    2788 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:18.421888    2788 out.go:358] Setting ErrFile to fd 2...
I0918 13:00:18.421890    2788 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:18.422045    2788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
I0918 13:00:18.422445    2788 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:18.422513    2788 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:18.423413    2788 ssh_runner.go:195] Run: systemctl --version
I0918 13:00:18.423421    2788 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/functional-815000/id_rsa Username:docker}
I0918 13:00:18.444293    2788 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-815000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| localhost/my-image                          | functional-815000 | a1742f29a86f1 | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-815000 | ad24bb8de9de7 | 30B    |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-815000 | ce2d2cda2d858 | 4.78MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-815000 image ls --format table --alsologtostderr:
I0918 13:00:20.414937    2801 out.go:345] Setting OutFile to fd 1 ...
I0918 13:00:20.415097    2801 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:20.415101    2801 out.go:358] Setting ErrFile to fd 2...
I0918 13:00:20.415104    2801 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:20.415235    2801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
I0918 13:00:20.415722    2801 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:20.415783    2801 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:20.416627    2801 ssh_runner.go:195] Run: systemctl --version
I0918 13:00:20.416635    2801 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/functional-815000/id_rsa Username:docker}
I0918 13:00:20.439213    2801 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/18 13:00:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-815000 image ls --format json --alsologtostderr:
[{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ad24bb8de9de7ef544c2c929c554198723f787813b8f063783883bb454497b55","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-815000"],"size":"30"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-815000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"a1742f29a86f1b2c8105d8fa70399310b5765f3e74da0a4222ec8d0809104ade","repoDigests":[],"repoTags":["localhost/my-image:functional-815000"],"size":"1410000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-815000 image ls --format json --alsologtostderr:
I0918 13:00:20.350733    2799 out.go:345] Setting OutFile to fd 1 ...
I0918 13:00:20.350882    2799 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:20.350886    2799 out.go:358] Setting ErrFile to fd 2...
I0918 13:00:20.350888    2799 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:20.351015    2799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
I0918 13:00:20.351392    2799 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:20.351451    2799 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:20.352313    2799 ssh_runner.go:195] Run: systemctl --version
I0918 13:00:20.352321    2799 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/functional-815000/id_rsa Username:docker}
I0918 13:00:20.372800    2799 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.06s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-815000 image ls --format yaml --alsologtostderr:
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: ad24bb8de9de7ef544c2c929c554198723f787813b8f063783883bb454497b55
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-815000
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-815000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-815000 image ls --format yaml --alsologtostderr:
I0918 13:00:18.493182    2790 out.go:345] Setting OutFile to fd 1 ...
I0918 13:00:18.493371    2790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:18.493375    2790 out.go:358] Setting ErrFile to fd 2...
I0918 13:00:18.493377    2790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:18.493520    2790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
I0918 13:00:18.493961    2790 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:18.494025    2790 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:18.494928    2790 ssh_runner.go:195] Run: systemctl --version
I0918 13:00:18.494936    2790 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/functional-815000/id_rsa Username:docker}
I0918 13:00:18.515860    2790 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh pgrep buildkitd: exit status 1 (56.090875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image build -t localhost/my-image:functional-815000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-815000 image build -t localhost/my-image:functional-815000 testdata/build --alsologtostderr: (1.660170542s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-815000 image build -t localhost/my-image:functional-815000 testdata/build --alsologtostderr:
I0918 13:00:18.620205    2794 out.go:345] Setting OutFile to fd 1 ...
I0918 13:00:18.620444    2794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:18.620447    2794 out.go:358] Setting ErrFile to fd 2...
I0918 13:00:18.620450    2794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 13:00:18.620586    2794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19667-1040/.minikube/bin
I0918 13:00:18.621024    2794 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:18.621741    2794 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 13:00:18.622709    2794 ssh_runner.go:195] Run: systemctl --version
I0918 13:00:18.622719    2794 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19667-1040/.minikube/machines/functional-815000/id_rsa Username:docker}
I0918 13:00:18.643677    2794 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2762484418.tar
I0918 13:00:18.643749    2794 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 13:00:18.647481    2794 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2762484418.tar
I0918 13:00:18.649714    2794 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2762484418.tar: stat -c "%s %y" /var/lib/minikube/build/build.2762484418.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2762484418.tar': No such file or directory
I0918 13:00:18.649742    2794 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2762484418.tar --> /var/lib/minikube/build/build.2762484418.tar (3072 bytes)
I0918 13:00:18.659369    2794 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2762484418
I0918 13:00:18.666081    2794 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2762484418 -xf /var/lib/minikube/build/build.2762484418.tar
I0918 13:00:18.670253    2794 docker.go:360] Building image: /var/lib/minikube/build/build.2762484418
I0918 13:00:18.670310    2794 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-815000 /var/lib/minikube/build/build.2762484418
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:a1742f29a86f1b2c8105d8fa70399310b5765f3e74da0a4222ec8d0809104ade done
#8 naming to localhost/my-image:functional-815000 done
#8 DONE 0.0s
I0918 13:00:20.233476    2794 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-815000 /var/lib/minikube/build/build.2762484418: (1.563218s)
I0918 13:00:20.233565    2794 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2762484418
I0918 13:00:20.238000    2794 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2762484418.tar
I0918 13:00:20.241401    2794 build_images.go:217] Built localhost/my-image:functional-815000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2762484418.tar
I0918 13:00:20.241421    2794 build_images.go:133] succeeded building to: functional-815000
I0918 13:00:20.241425    2794 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.79s)
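Note: the BuildKit trace pins down the build context fairly precisely: a 97-byte Dockerfile with three steps (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) plus a small content.txt. An inferred reconstruction, not the verbatim testdata/build contents:

	mkdir -p build && printf 'test\n' > build/content.txt   # file contents assumed
	cat > build/Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	out/minikube-darwin-arm64 -p functional-815000 image build -t localhost/my-image:functional-815000 build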

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.78370075s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-815000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/DockerEnv/bash (0.26s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-815000 docker-env) && out/minikube-darwin-arm64 status -p functional-815000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-815000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.26s)
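Note: `docker-env` prints shell export statements (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH, MINIKUBE_ACTIVE_DOCKERD) that point the host's docker CLI at the daemon inside the VM, which is why the `docker images` above lists the cluster's images rather than the host's. Typical manual usage:

	out/minikube-darwin-arm64 -p functional-815000 docker-env          # inspect what would be exported
	eval $(out/minikube-darwin-arm64 -p functional-815000 docker-env)
	docker images                                                      # now talks to the in-VM daemon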

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-815000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-815000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-rmnjn" [ce1bd7ea-cca0-4d5a-8c56-d0dc267b52df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-rmnjn" [ce1bd7ea-cca0-4d5a-8c56-d0dc267b52df] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.009069167s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.09s)
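Note: exposing the deployment as type=NodePort is what makes the later HTTPS/Format/URL checks resolve to the node's IP plus an allocated high port (https://192.168.105.4:31630 below). The allocated port can be read back directly (a sketch):

	kubectl --context functional-815000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
	# prints the allocated NodePort, 31630 in this run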

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image load --daemon kicbase/echo-server:functional-815000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image load --daemon kicbase/echo-server:functional-815000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-815000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image load --daemon kicbase/echo-server:functional-815000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image save kicbase/echo-server:functional-815000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image rm kicbase/echo-server:functional-815000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-815000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 image save --daemon kicbase/echo-server:functional-815000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-815000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
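Note: the image subcommands exercised above compose into a save/remove/load round trip that restores an image without pulling it again. A compact sketch (tar path illustrative):

	out/minikube-darwin-arm64 -p functional-815000 image save kicbase/echo-server:functional-815000 /tmp/echo-server.tar
	out/minikube-darwin-arm64 -p functional-815000 image rm kicbase/echo-server:functional-815000
	out/minikube-darwin-arm64 -p functional-815000 image load /tmp/echo-server.tar
	out/minikube-darwin-arm64 -p functional-815000 image ls | grep echo-server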

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-815000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-815000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-815000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-815000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2290: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-815000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
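Note: `minikube tunnel` stays in the foreground and routes traffic so that LoadBalancer services get a reachable ingress IP; the WaitService and Access* steps below depend on it running. Manual usage sketch:

	out/minikube-darwin-arm64 -p functional-815000 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	# ... exercise LoadBalancer services while the tunnel is up ...
	kill $TUNNEL_PID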

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-815000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [06d6143f-c8c5-4fbd-ac22-2878f022f408] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [06d6143f-c8c5-4fbd-ac22-2878f022f408] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006542625s
I0918 12:59:41.510315    1516 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
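Note: judging from the labels and service name in the log, testdata/testsvc.yaml creates an nginx pod labelled run=nginx-svc and a LoadBalancer service nginx-svc in front of it (the tunnel later reports the 10.103.74.244 ingress IP). A rough equivalent, assumed rather than copied from the testdata:

	kubectl --context functional-815000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: nginx-svc
	  labels:
	    run: nginx-svc
	spec:
	  containers:
	  - name: nginx
	    image: docker.io/library/nginx:alpine
	---
	apiVersion: v1
	kind: Service
	metadata:
	  name: nginx-svc
	spec:
	  type: LoadBalancer
	  selector:
	    run: nginx-svc
	  ports:
	  - port: 80
	EOF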

TestFunctional/parallel/ServiceCmd/List (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 service list -o json
functional_test.go:1494: Took "80.028917ms" to run "out/minikube-darwin-arm64 -p functional-815000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31630
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31630
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-815000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.74.244 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0918 12:59:41.599244    1516 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
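The dig probe above can be re-expressed with Go's resolver; a sketch assuming, as in the test, that `minikube tunnel` is routing cluster IPs and that 10.96.0.10 (from the log) is the cluster DNS service:

package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Send queries straight to the in-cluster DNS server instead of the host resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ips, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		log.Fatalf("lookup failed: %v", err)
	}
	fmt.Println("resolved:", ips)
}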

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0918 12:59:41.636687    1516 config.go:182] Loaded profile config "functional-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-815000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "92.684084ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.856708ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "89.088792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.9155ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
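The `Took "..."` lines come from timing each invocation; a minimal sketch of the same measurement around the command from this test (the bare `minikube` binary name on PATH is an assumption; the harness invokes its own build):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("minikube", "profile", "list", "-o", "json", "--light")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("profile list failed: %v\n%s", err, out)
	}
	// Mirrors the harness's Took "<duration>" log line.
	fmt.Printf("Took %q to run %q\n", time.Since(start).String(),
		"minikube profile list -o json --light")
}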

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port362016612/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726689603119464000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port362016612/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726689603119464000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port362016612/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726689603119464000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port362016612/001/test-1726689603119464000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (52.179792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0918 13:00:03.172158    1516 retry.go:31] will retry after 328.363828ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 20:00 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 20:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 20:00 test-1726689603119464000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh cat /mount-9p/test-1726689603119464000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-815000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7eb242a8-75ba-4702-ada5-e163d4f52c17] Pending
helpers_test.go:344: "busybox-mount" [7eb242a8-75ba-4702-ada5-e163d4f52c17] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7eb242a8-75ba-4702-ada5-e163d4f52c17] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7eb242a8-75ba-4702-ada5-e163d4f52c17] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003932167s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-815000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port362016612/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.99s)
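The retry.go line above shows the harness rerunning the findmnt probe after a delay until the 9p mount appears; a sketch of that pattern (attempt count and doubling backoff are illustrative, not the harness's exact policy):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// probeMount reruns the findmnt-over-ssh check from the log until it succeeds
// or the attempts are exhausted.
func probeMount(profile string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p")
		if err = cmd.Run(); err == nil {
			return nil
		}
		log.Printf("will retry after %v: %v", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // illustrative; the harness uses jittered delays
	}
	return fmt.Errorf("mount never appeared: %w", err)
}

func main() {
	if err := probeMount("functional-815000", 5, 300*time.Millisecond); err != nil {
		log.Fatal(err)
	}
	fmt.Println("9p mount is visible in the guest")
}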

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port779639000/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.697333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0918 13:00:08.165704    1516 retry.go:31] will retry after 456.997159ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port779639000/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh "sudo umount -f /mount-9p": exit status 1 (57.904125ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-815000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port779639000/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount1: exit status 1 (74.005292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0918 13:00:09.154376    1516 retry.go:31] will retry after 718.196582ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount2: exit status 1 (51.989166ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0918 13:00:10.031396    1516 retry.go:31] will retry after 561.846794ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-815000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-815000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-815000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2187698954/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-815000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-815000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-815000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-660000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0918 13:00:56.293014    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:01:24.021557    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/addons-476000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-660000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m58.606659667s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.80s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-660000 -- rollout status deployment/busybox: (2.899517916s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-cssbs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-djfvb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-pw4dj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-cssbs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-djfvb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-pw4dj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-cssbs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-djfvb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-pw4dj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.41s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-cssbs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-cssbs -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-djfvb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-djfvb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-pw4dj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-660000 -- exec busybox-7dff88458-pw4dj -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-660000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-660000 -v=7 --alsologtostderr: (55.406625167s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.63s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-660000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.29s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp testdata/cp-test.txt ha-660000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile147687725/001/cp-test_ha-660000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000:/home/docker/cp-test.txt ha-660000-m02:/home/docker/cp-test_ha-660000_ha-660000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test_ha-660000_ha-660000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000:/home/docker/cp-test.txt ha-660000-m03:/home/docker/cp-test_ha-660000_ha-660000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test_ha-660000_ha-660000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000:/home/docker/cp-test.txt ha-660000-m04:/home/docker/cp-test_ha-660000_ha-660000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test_ha-660000_ha-660000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp testdata/cp-test.txt ha-660000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile147687725/001/cp-test_ha-660000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m02:/home/docker/cp-test.txt ha-660000:/home/docker/cp-test_ha-660000-m02_ha-660000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test_ha-660000-m02_ha-660000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m02:/home/docker/cp-test.txt ha-660000-m03:/home/docker/cp-test_ha-660000-m02_ha-660000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test_ha-660000-m02_ha-660000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m02:/home/docker/cp-test.txt ha-660000-m04:/home/docker/cp-test_ha-660000-m02_ha-660000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test_ha-660000-m02_ha-660000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp testdata/cp-test.txt ha-660000-m03:/home/docker/cp-test.txt
E0918 13:04:27.254268    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:04:27.261885    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test.txt"
E0918 13:04:27.274550    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
E0918 13:04:27.297951    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile147687725/001/cp-test_ha-660000-m03.txt
E0918 13:04:27.339690    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test.txt"
E0918 13:04:27.421301    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m03:/home/docker/cp-test.txt ha-660000:/home/docker/cp-test_ha-660000-m03_ha-660000.txt
E0918 13:04:27.582874    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test_ha-660000-m03_ha-660000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m03:/home/docker/cp-test.txt ha-660000-m02:/home/docker/cp-test_ha-660000-m03_ha-660000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test.txt"
E0918 13:04:27.904802    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test_ha-660000-m03_ha-660000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m03:/home/docker/cp-test.txt ha-660000-m04:/home/docker/cp-test_ha-660000-m03_ha-660000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test_ha-660000-m03_ha-660000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp testdata/cp-test.txt ha-660000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile147687725/001/cp-test_ha-660000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m04:/home/docker/cp-test.txt ha-660000:/home/docker/cp-test_ha-660000-m04_ha-660000.txt
E0918 13:04:28.548056    1516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/functional-815000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000 "sudo cat /home/docker/cp-test_ha-660000-m04_ha-660000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m04:/home/docker/cp-test.txt ha-660000-m02:/home/docker/cp-test_ha-660000-m04_ha-660000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m02 "sudo cat /home/docker/cp-test_ha-660000-m04_ha-660000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 cp ha-660000-m04:/home/docker/cp-test.txt ha-660000-m03:/home/docker/cp-test_ha-660000-m04_ha-660000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-660000 ssh -n ha-660000-m03 "sudo cat /home/docker/cp-test_ha-660000-m04_ha-660000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.21s)
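Each CopyFile step above is a cp-then-cat round trip; a sketch of one such pair reduced to a standalone check (profile and paths copied from the log; the byte-for-byte comparison is an assumption about what the helper verifies):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy the file into the node, as in helpers_test.go:556.
	if err := exec.Command("minikube", "-p", "ha-660000", "cp",
		"testdata/cp-test.txt", "ha-660000:/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatalf("cp failed: %v", err)
	}
	// Read it back over ssh, as in helpers_test.go:534.
	got, err := exec.Command("minikube", "-p", "ha-660000", "ssh", "-n", "ha-660000",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("round-tripped file does not match")
	}
	log.Println("copy verified")
}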

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.035564542s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-302000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-302000 --output=json --user=testUser: (2.091775959s)
--- PASS: TestJSONOutput/stop/Command (2.09s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-051000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-051000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.006ms)

-- stdout --
	{"specversion":"1.0","id":"56572397-c246-4336-8807-547144f3dea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-051000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca7933b9-977e-4243-a6c5-c258338337af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"701a3022-594c-4886-a832-281fcbf6997e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig"}}
	{"specversion":"1.0","id":"c2e175bb-ad55-4327-bc15-a2778c0f98dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ffb1a51f-b07c-4bcd-b197-1474c16efb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"980b5225-5f8d-46ae-a9aa-61aa0302cedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube"}}
	{"specversion":"1.0","id":"25ef3aa7-4293-44e2-a132-2e483ed1a132","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"988baf37-f1a8-408c-a337-9dced8677866","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-051000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-051000
--- PASS: TestErrorJSONOutput (0.20s)
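The stdout above is line-delimited CloudEvents; a sketch of consuming it in Go (field names are copied from the log; the filter on the error event type is illustrative):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the JSON lines shown above.
type event struct {
	Specversion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start ... --output=json` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			log.Printf("skipping non-JSON line: %v", err)
			continue
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s: %s (exit code %s)\n",
				e.Data["name"], e.Data["message"], e.Data["exitcode"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}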

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-748000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (144.911709ms)

-- stdout --
	* [NoKubernetes-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19667-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19667-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-748000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-748000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.5765ms)

-- stdout --
	* The control-plane node NoKubernetes-748000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-748000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.09s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-748000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-748000: (3.614048791s)
--- PASS: TestNoKubernetes/serial/Stop (3.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-748000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-748000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (50.36375ms)

-- stdout --
	* The control-plane node NoKubernetes-748000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-748000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-367000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-718000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-718000 --alsologtostderr -v=3: (2.962639333s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-718000 -n old-k8s-version-718000: exit status 7 (66.865833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-718000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-882000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-882000 --alsologtostderr -v=3: (3.206486916s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-882000 -n no-preload-882000: exit status 7 (62.744291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-882000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-969000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-969000 --alsologtostderr -v=3: (1.753700625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.75s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-969000 -n embed-certs-969000: exit status 7 (62.420959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-969000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-826000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-826000 --alsologtostderr -v=3: (3.913221083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-826000 -n default-k8s-diff-port-826000: exit status 7 (64.290625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-826000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-717000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-717000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-717000 --alsologtostderr -v=3: (3.607311417s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.61s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-717000 -n newest-cni-717000: exit status 7 (63.034375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-717000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-995000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-995000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)

TestNetworkPlugins/group/cilium (2.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-838000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/hosts:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/resolv.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-838000

>>> host: crictl pods:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crictl containers:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe netcat deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-838000" does not exist

>>> k8s: netcat logs:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-838000" does not exist

>>> k8s: coredns logs:
error: context "cilium-838000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-838000" does not exist

>>> k8s: api server logs:
error: context "cilium-838000" does not exist

>>> host: /etc/cni:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip a s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip r s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables-save:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables table nat:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-838000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-838000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: kubelet daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubelet logs:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/jenkins/minikube-integration/19667-1040/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 13:27:31 PDT
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://10.0.2.15:8443
  name: stopped-upgrade-367000
contexts:
- context:
    cluster: stopped-upgrade-367000
    extensions:
    - extension:
        last-update: Wed, 18 Sep 2024 13:27:31 PDT
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: stopped-upgrade-367000
  name: stopped-upgrade-367000
current-context: stopped-upgrade-367000
kind: Config
preferences: {}
users:
- name: stopped-upgrade-367000
  user:
    client-certificate: /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.crt
    client-key: /Users/jenkins/minikube-integration/19667-1040/.minikube/profiles/stopped-upgrade-367000/client.key
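
Note: the kubeconfig dumped above has current-context "stopped-upgrade-367000" and contains no "cilium-838000" entry at all, which is consistent with every query in this debug dump failing with "context was not found for specified context: cilium-838000" — the test was skipped before any cilium cluster was created. A minimal Go sketch of the same existence check using k8s.io/client-go is shown below; the kubeconfig path is a hypothetical stand-in, not one taken from this run.

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path for illustration; substitute your own kubeconfig.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	if _, ok := cfg.Contexts["cilium-838000"]; !ok {
		// Matches the repeated error above:
		// "context was not found for specified context: cilium-838000"
		fmt.Println("context cilium-838000 not found")
	}
}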

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker system info:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-dockerd version:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd config dump:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/crio:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

----------------------- debugLogs end: cilium-838000 [took: 2.193812625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-838000
--- SKIP: TestNetworkPlugins/group/cilium (2.30s)
