Test Report: QEMU_macOS 19790

b9d2e2c9658f87d0032c63e9ff5f9056e8d14f14:2024-10-14:36644

Failed tests (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 26.73
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.29
22 TestOffline 10.14
47 TestCertOptions 10.25
48 TestCertExpiration 195.69
49 TestDockerFlags 12.31
50 TestForceSystemdFlag 10.01
51 TestForceSystemdEnv 10.26
96 TestFunctional/parallel/ServiceCmdConnect 34.87
161 TestMultiControlPlane/serial/StartCluster 725.39
162 TestMultiControlPlane/serial/DeployApp 91.04
163 TestMultiControlPlane/serial/PingHostFromPods 0.1
164 TestMultiControlPlane/serial/AddWorkerNode 0.09
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
168 TestMultiControlPlane/serial/StopSecondaryNode 0.12
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
170 TestMultiControlPlane/serial/RestartSecondaryNode 0.15
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 982.78
183 TestJSONOutput/start/Command 725.28
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.09
195 TestJSONOutput/unpause/Command 0.06
215 TestMountStart/serial/StartWithMountFirst 10.15
218 TestMultiNode/serial/FreshStart2Nodes 9.86
219 TestMultiNode/serial/DeployApp2Nodes 116.94
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.08
222 TestMultiNode/serial/MultiNodeLabels 0.07
223 TestMultiNode/serial/ProfileList 0.09
224 TestMultiNode/serial/CopyFile 0.07
225 TestMultiNode/serial/StopNode 0.15
226 TestMultiNode/serial/StartAfterStop 43.86
227 TestMultiNode/serial/RestartKeepsNodes 8.81
228 TestMultiNode/serial/DeleteNode 0.11
229 TestMultiNode/serial/StopMultiNode 3
230 TestMultiNode/serial/RestartMultiNode 5.27
231 TestMultiNode/serial/ValidateNameConflict 20.07
235 TestPreload 9.97
237 TestScheduledStopUnix 9.98
238 TestSkaffold 12.75
241 TestRunningBinaryUpgrade 604.13
243 TestKubernetesUpgrade 18.76
257 TestStoppedBinaryUpgrade/Upgrade 618.24
266 TestPause/serial/Start 9.91
270 TestNoKubernetes/serial/StartWithK8s 9.93
271 TestNoKubernetes/serial/StartWithStopK8s 5.31
272 TestNoKubernetes/serial/Start 5.31
276 TestNoKubernetes/serial/StartNoArgs 5.76
277 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.78
278 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.42
280 TestNetworkPlugins/group/auto/Start 9.76
281 TestNetworkPlugins/group/flannel/Start 10.03
282 TestNetworkPlugins/group/enable-default-cni/Start 9.97
283 TestNetworkPlugins/group/kindnet/Start 9.76
284 TestNetworkPlugins/group/bridge/Start 9.89
285 TestNetworkPlugins/group/kubenet/Start 10.04
286 TestNetworkPlugins/group/custom-flannel/Start 9.91
287 TestNetworkPlugins/group/calico/Start 9.96
288 TestNetworkPlugins/group/false/Start 9.86
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
299 TestStartStop/group/old-k8s-version/serial/Pause 0.11
301 TestStartStop/group/no-preload/serial/FirstStart 9.87
302 TestStartStop/group/no-preload/serial/DeployApp 0.1
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
306 TestStartStop/group/no-preload/serial/SecondStart 5.27
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
310 TestStartStop/group/no-preload/serial/Pause 0.11
312 TestStartStop/group/embed-certs/serial/FirstStart 10.03
313 TestStartStop/group/embed-certs/serial/DeployApp 0.1
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
317 TestStartStop/group/embed-certs/serial/SecondStart 5.24
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
321 TestStartStop/group/embed-certs/serial/Pause 0.11
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.03
325 TestStartStop/group/newest-cni/serial/FirstStart 9.91
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.98
335 TestStartStop/group/newest-cni/serial/SecondStart 5.28
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
343 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (26.73s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-306000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-306000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (26.728058708s)

-- stdout --
	{"specversion":"1.0","id":"875ee700-c7ad-4771-b912-c2175595963f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-306000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"428d31e6-c33b-4cdb-9822-bca317c0ce7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"983bcb13-5d09-4b02-bb2c-2d8d3e2d7cd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig"}}
	{"specversion":"1.0","id":"84b67623-dea2-4842-af1f-e44c15684cfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d8562dc4-4b01-40e2-acfb-1173965ed096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"45174ebe-7924-433b-b330-77e6de403ac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube"}}
	{"specversion":"1.0","id":"816f491e-85de-46a8-a2a8-f03bd8326351","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"d86757e7-9e8c-4878-9ce8-456190089e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1977d76f-0c31-47c7-803d-f41517d326e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"168c3777-5eea-4bf2-99b4-e97f5c51eeb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4adb482d-a75d-432a-b649-e5dcdbe83f43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-306000\" primary control-plane node in \"download-only-306000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef7b37ff-e8d8-4ef1-8b4d-b59c8e6e0759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"01da4450-9aa5-41fb-8f44-1cba007039a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080] Decompressors:map[bz2:0x140006fc8c0 gz:0x140006fc8c8 tar:0x140006fc800 tar.bz2:0x140006fc810 tar.gz:0x140006fc850 tar.xz:0x140006fc860 tar.zst:0x140006fc870 tbz2:0x140006fc810 tgz:0x140
006fc850 txz:0x140006fc860 tzst:0x140006fc870 xz:0x140006fc8d0 zip:0x140006fc8e0 zst:0x140006fc8d8] Getters:map[file:0x1400060f5e0 http:0x140008e00f0 https:0x140008e0140] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"77480420-20ed-4ac4-8a96-5f9e61d0116b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1014 06:37:57.812424    1498 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:37:57.812602    1498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:37:57.812605    1498 out.go:358] Setting ErrFile to fd 2...
	I1014 06:37:57.812607    1498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:37:57.812743    1498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	W1014 06:37:57.812843    1498 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19790-979/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19790-979/.minikube/config/config.json: no such file or directory
	I1014 06:37:57.814301    1498 out.go:352] Setting JSON to true
	I1014 06:37:57.834286    1498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":447,"bootTime":1728912630,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:37:57.834360    1498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:37:57.839947    1498 out.go:97] [download-only-306000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 06:37:57.840097    1498 notify.go:220] Checking for updates...
	W1014 06:37:57.840115    1498 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 06:37:57.842909    1498 out.go:169] MINIKUBE_LOCATION=19790
	I1014 06:37:57.845893    1498 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:37:57.850942    1498 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:37:57.853975    1498 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:37:57.857447    1498 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	W1014 06:37:57.863933    1498 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 06:37:57.864137    1498 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:37:57.868723    1498 out.go:97] Using the qemu2 driver based on user configuration
	I1014 06:37:57.868741    1498 start.go:297] selected driver: qemu2
	I1014 06:37:57.868755    1498 start.go:901] validating driver "qemu2" against <nil>
	I1014 06:37:57.868818    1498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 06:37:57.871916    1498 out.go:169] Automatically selected the socket_vmnet network
	I1014 06:37:57.877882    1498 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1014 06:37:57.877960    1498 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 06:37:57.878002    1498 cni.go:84] Creating CNI manager for ""
	I1014 06:37:57.878045    1498 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1014 06:37:57.878107    1498 start.go:340] cluster config:
	{Name:download-only-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:37:57.882902    1498 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 06:37:57.886993    1498 out.go:97] Downloading VM boot image ...
	I1014 06:37:57.887014    1498 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1014 06:38:10.953527    1498 out.go:97] Starting "download-only-306000" primary control-plane node in "download-only-306000" cluster
	I1014 06:38:10.953559    1498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:38:11.013188    1498 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1014 06:38:11.013213    1498 cache.go:56] Caching tarball of preloaded images
	I1014 06:38:11.013437    1498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:38:11.018143    1498 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1014 06:38:11.018150    1498 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:11.109558    1498 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1014 06:38:23.202758    1498 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:23.203357    1498 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:23.898846    1498 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1014 06:38:23.899049    1498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/download-only-306000/config.json ...
	I1014 06:38:23.899073    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/download-only-306000/config.json: {Name:mk73d5bf07ad3f3c2a9d2c1a30a6647fa5a1dc82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 06:38:23.899345    1498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:38:23.899585    1498 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1014 06:38:24.458344    1498 out.go:193] 
	W1014 06:38:24.464363    1498 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080] Decompressors:map[bz2:0x140006fc8c0 gz:0x140006fc8c8 tar:0x140006fc800 tar.bz2:0x140006fc810 tar.gz:0x140006fc850 tar.xz:0x140006fc860 tar.zst:0x140006fc870 tbz2:0x140006fc810 tgz:0x140006fc850 txz:0x140006fc860 tzst:0x140006fc870 xz:0x140006fc8d0 zip:0x140006fc8e0 zst:0x140006fc8d8] Getters:map[file:0x1400060f5e0 http:0x140008e00f0 https:0x140008e0140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1014 06:38:24.464398    1498 out_reason.go:110] 
	W1014 06:38:24.472311    1498 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 06:38:24.475258    1498 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-306000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (26.73s)
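
The root cause above is a 404 while fetching the checksum file for the v1.20.0 darwin/arm64 kubectl. A minimal reproduction sketch, assuming only curl on a host with network access (the URL is taken verbatim from the error message; per the log it returns "bad response code: 404", which suggests no darwin/arm64 kubectl artifact was ever published for v1.20.0):

  # fetch only the headers of the checksum file minikube tries to download
  $ curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
  # expected per the log above: a 404 status, which makes the checksum lookup
  # (and therefore the whole kubectl cache step) fail with exit status 40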

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
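
This subtest only asserts that the kubectl binary cached by the previous subtest exists on disk, so it fails as a direct consequence of the json-events failure above. A quick manual check, using the cache path from the error message:

  # stat the cached binary the test expects; per the log it was never downloaded
  $ ls -l /Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl
  # expected per the log: "No such file or directory"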

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
I1014 06:38:36.095447    1497 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-303000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-303000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 : exit status 40 (178.968792ms)

-- stdout --
	* [binary-mirror-303000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-303000" primary control-plane node in "binary-mirror-303000" cluster
	
	

-- /stdout --
** stderr ** 
	I1014 06:38:36.158891    1573 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:38:36.159028    1573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:38:36.159031    1573 out.go:358] Setting ErrFile to fd 2...
	I1014 06:38:36.159034    1573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:38:36.159182    1573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 06:38:36.160398    1573 out.go:352] Setting JSON to false
	I1014 06:38:36.178053    1573 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":486,"bootTime":1728912630,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:38:36.178123    1573 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:38:36.183477    1573 out.go:177] * [binary-mirror-303000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 06:38:36.193445    1573 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 06:38:36.193519    1573 notify.go:220] Checking for updates...
	I1014 06:38:36.200474    1573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:38:36.203405    1573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:38:36.206431    1573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:38:36.209469    1573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 06:38:36.212573    1573 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:38:36.216446    1573 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 06:38:36.223425    1573 start.go:297] selected driver: qemu2
	I1014 06:38:36.223432    1573 start.go:901] validating driver "qemu2" against <nil>
	I1014 06:38:36.223489    1573 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 06:38:36.226407    1573 out.go:177] * Automatically selected the socket_vmnet network
	I1014 06:38:36.231881    1573 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1014 06:38:36.231976    1573 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 06:38:36.231997    1573 cni.go:84] Creating CNI manager for ""
	I1014 06:38:36.232030    1573 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 06:38:36.232037    1573 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 06:38:36.232096    1573 start.go:340] cluster config:
	{Name:binary-mirror-303000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-303000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49313 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:38:36.236598    1573 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 06:38:36.243422    1573 out.go:177] * Starting "binary-mirror-303000" primary control-plane node in "binary-mirror-303000" cluster
	I1014 06:38:36.247289    1573 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:38:36.247303    1573 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 06:38:36.247308    1573 cache.go:56] Caching tarball of preloaded images
	I1014 06:38:36.247391    1573 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 06:38:36.247396    1573 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 06:38:36.247591    1573 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/binary-mirror-303000/config.json ...
	I1014 06:38:36.247603    1573 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/binary-mirror-303000/config.json: {Name:mk460e920c081a3e4f3dc41128baf3d7f3654a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 06:38:36.247929    1573 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:38:36.247984    1573 download.go:107] Downloading: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I1014 06:38:36.281476    1573 out.go:201] 
	W1014 06:38:36.285470    1573 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109d39080 0x109d39080 0x109d39080 0x109d39080 0x109d39080 0x109d39080 0x109d39080] Decompressors:map[bz2:0x1400078b360 gz:0x1400078b368 tar:0x1400078b2c0 tar.bz2:0x1400078b2d0 tar.gz:0x1400078b2f0 tar.xz:0x1400078b320 tar.zst:0x1400078b350 tbz2:0x1400078b2d0 tgz:0x1400078b2f0 txz:0x1400078b320 tzst:0x1400078b350 xz:0x1400078b370 zip:0x1400078b3b0 zst:0x1400078b378] Getters:map[file:0x1400078cbe0 http:0x14000c17310 https:0x14000c17360] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109d39080 0x109d39080 0x109d39080 0x109d39080 0x109d39080 0x109d39080 0x109d39080] Decompressors:map[bz2:0x1400078b360 gz:0x1400078b368 tar:0x1400078b2c0 tar.bz2:0x1400078b2d0 tar.gz:0x1400078b2f0 tar.xz:0x1400078b320 tar.zst:0x1400078b350 tbz2:0x1400078b2d0 tgz:0x1400078b2f0 txz:0x1400078b320 tzst:0x1400078b350 xz:0x1400078b370 zip:0x1400078b3b0 zst:0x1400078b378] Getters:map[file:0x1400078cbe0 http:0x14000c17310 https:0x14000c17360] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W1014 06:38:36.285478    1573 out.go:270] * 
	* 
	W1014 06:38:36.285984    1573 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 06:38:36.300426    1573 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-303000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49313" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-303000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-303000
--- FAIL: TestBinaryMirror (0.29s)
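
Here minikube is pointed at a short-lived local mirror on 127.0.0.1:49313 and the kubectl download from it dies with "unexpected EOF", i.e. the mirror closed the connection mid-transfer. A hedged sketch of the layout such a mirror has to serve, with python3's built-in server as a hypothetical stand-in (the path shape and port are taken from the request URL in the log; the test itself runs its own in-process server, not this one):

  # files the client requests, relative to the mirror root:
  #   v1.31.1/bin/darwin/arm64/kubectl
  #   v1.31.1/bin/darwin/arm64/kubectl.sha256
  $ python3 -m http.server 49313   # hypothetical stand-in on the port from the log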

TestOffline (10.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-533000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-533000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.983256334s)

-- stdout --
	* [offline-docker-533000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-533000" primary control-plane node in "offline-docker-533000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-533000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:37:52.510859    3809 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:37:52.511022    3809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:37:52.511026    3809 out.go:358] Setting ErrFile to fd 2...
	I1014 07:37:52.511028    3809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:37:52.511142    3809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:37:52.512456    3809 out.go:352] Setting JSON to false
	I1014 07:37:52.531572    3809 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4042,"bootTime":1728912630,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:37:52.531647    3809 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:37:52.536540    3809 out.go:177] * [offline-docker-533000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:37:52.544454    3809 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:37:52.544484    3809 notify.go:220] Checking for updates...
	I1014 07:37:52.550571    3809 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:37:52.553340    3809 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:37:52.556378    3809 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:37:52.559443    3809 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:37:52.562334    3809 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:37:52.565733    3809 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:37:52.565792    3809 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:37:52.569362    3809 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:37:52.576386    3809 start.go:297] selected driver: qemu2
	I1014 07:37:52.576396    3809 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:37:52.576405    3809 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:37:52.578477    3809 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:37:52.581399    3809 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:37:52.584451    3809 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:37:52.584468    3809 cni.go:84] Creating CNI manager for ""
	I1014 07:37:52.584489    3809 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:37:52.584494    3809 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:37:52.584538    3809 start.go:340] cluster config:
	{Name:offline-docker-533000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:37:52.588987    3809 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:52.597237    3809 out.go:177] * Starting "offline-docker-533000" primary control-plane node in "offline-docker-533000" cluster
	I1014 07:37:52.601380    3809 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:37:52.601421    3809 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:37:52.601432    3809 cache.go:56] Caching tarball of preloaded images
	I1014 07:37:52.601522    3809 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:37:52.601527    3809 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:37:52.601607    3809 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/offline-docker-533000/config.json ...
	I1014 07:37:52.601618    3809 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/offline-docker-533000/config.json: {Name:mk654627a51d052149ed78daeb3e215affb65b8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:37:52.601916    3809 start.go:360] acquireMachinesLock for offline-docker-533000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:37:52.601961    3809 start.go:364] duration metric: took 36.583µs to acquireMachinesLock for "offline-docker-533000"
	I1014 07:37:52.601973    3809 start.go:93] Provisioning new machine with config: &{Name:offline-docker-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:37:52.601997    3809 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:37:52.610333    3809 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:37:52.625693    3809 start.go:159] libmachine.API.Create for "offline-docker-533000" (driver="qemu2")
	I1014 07:37:52.625727    3809 client.go:168] LocalClient.Create starting
	I1014 07:37:52.625812    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:37:52.625849    3809 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:52.625862    3809 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:52.625913    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:37:52.625943    3809 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:52.625952    3809 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:52.626326    3809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:37:52.782876    3809 main.go:141] libmachine: Creating SSH key...
	I1014 07:37:52.884654    3809 main.go:141] libmachine: Creating Disk image...
	I1014 07:37:52.884662    3809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:37:52.884844    3809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2
	I1014 07:37:52.895275    3809 main.go:141] libmachine: STDOUT: 
	I1014 07:37:52.895313    3809 main.go:141] libmachine: STDERR: 
	I1014 07:37:52.895388    3809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2 +20000M
	I1014 07:37:52.904943    3809 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:37:52.904966    3809 main.go:141] libmachine: STDERR: 
	I1014 07:37:52.904983    3809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2
	I1014 07:37:52.904987    3809 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:37:52.904997    3809 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:37:52.905035    3809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:15:45:41:db:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2
	I1014 07:37:52.907230    3809 main.go:141] libmachine: STDOUT: 
	I1014 07:37:52.907251    3809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:37:52.907272    3809 client.go:171] duration metric: took 281.538ms to LocalClient.Create
	I1014 07:37:54.908615    3809 start.go:128] duration metric: took 2.306664209s to createHost
	I1014 07:37:54.908625    3809 start.go:83] releasing machines lock for "offline-docker-533000", held for 2.306710875s
	W1014 07:37:54.908632    3809 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:37:54.911495    3809 out.go:177] * Deleting "offline-docker-533000" in qemu2 ...
	W1014 07:37:54.924273    3809 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:37:54.924284    3809 start.go:729] Will try again in 5 seconds ...
	I1014 07:37:59.926419    3809 start.go:360] acquireMachinesLock for offline-docker-533000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:37:59.926986    3809 start.go:364] duration metric: took 457.916µs to acquireMachinesLock for "offline-docker-533000"
	I1014 07:37:59.927198    3809 start.go:93] Provisioning new machine with config: &{Name:offline-docker-533000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:37:59.927467    3809 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:37:59.940326    3809 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:37:59.989976    3809 start.go:159] libmachine.API.Create for "offline-docker-533000" (driver="qemu2")
	I1014 07:37:59.990033    3809 client.go:168] LocalClient.Create starting
	I1014 07:37:59.990164    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:37:59.990243    3809 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:59.990285    3809 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:59.990346    3809 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:37:59.990402    3809 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:59.990413    3809 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:59.991181    3809 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:38:00.158315    3809 main.go:141] libmachine: Creating SSH key...
	I1014 07:38:00.383078    3809 main.go:141] libmachine: Creating Disk image...
	I1014 07:38:00.383092    3809 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:38:00.383364    3809 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2
	I1014 07:38:00.393664    3809 main.go:141] libmachine: STDOUT: 
	I1014 07:38:00.393688    3809 main.go:141] libmachine: STDERR: 
	I1014 07:38:00.393748    3809 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2 +20000M
	I1014 07:38:00.402201    3809 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:38:00.402222    3809 main.go:141] libmachine: STDERR: 
	I1014 07:38:00.402234    3809 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2
	I1014 07:38:00.402239    3809 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:38:00.402251    3809 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:38:00.402284    3809 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:8c:23:d3:2f:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/offline-docker-533000/disk.qcow2
	I1014 07:38:00.404064    3809 main.go:141] libmachine: STDOUT: 
	I1014 07:38:00.404080    3809 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:38:00.404093    3809 client.go:171] duration metric: took 414.062291ms to LocalClient.Create
	I1014 07:38:02.406250    3809 start.go:128] duration metric: took 2.478804291s to createHost
	I1014 07:38:02.406308    3809 start.go:83] releasing machines lock for "offline-docker-533000", held for 2.479353333s
	W1014 07:38:02.406686    3809 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-533000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:38:02.431441    3809 out.go:201] 
	W1014 07:38:02.438668    3809 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:38:02.438694    3809 out.go:270] * 
	* 
	W1014 07:38:02.440240    3809 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:38:02.453417    3809 out.go:201] 

** /stderr **
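The stderr block above pins down where every start attempt dies: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket /var/run/socket_vmnet and only then exec qemu-system-aarch64, passing the connected socket along as fd 3 (hence the "-netdev socket,id=net0,fd=3" argument). A minimal Go sketch of that launch shape, with most qemu flags trimmed from the logged command; an illustration of the mechanism, not minikube's actual code:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Command shape copied from the libmachine line in the log above.
	// socket_vmnet_client dials the unix socket first; qemu never sees
	// the socket path, it only inherits the already-connected fd 3.
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-accel", "hvf",
		"-m", "2048", "-smp", "2",
		"-display", "none",
		"-netdev", "socket,id=net0,fd=3",
		"-device", "virtio-net-pci,netdev=net0",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	// With no daemon bound to /var/run/socket_vmnet this fails up front
	// with the same "Connection refused" seen throughout the report.
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}

Because the client owns the socket connection, the error surfaces from socket_vmnet_client before the VM ever boots, which is why the disk-image steps above succeed and the failure only appears at "Starting QEMU VM...".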
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-533000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-14 07:38:02.461192 -0700 PDT m=+3604.798353418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-533000 -n offline-docker-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-533000 -n offline-docker-533000: exit status 7 (52.285417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-533000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-533000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-533000
--- FAIL: TestOffline (10.14s)
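The qemu2 start failures in this report share one precondition: nothing was listening on /var/run/socket_vmnet when socket_vmnet_client dialed it. A minimal probe for that state, using only the socket path from the logs (a sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket socket_vmnet_client needs before it can hand
	// qemu a network fd. "connection refused" here reproduces the exact
	// state every qemu2 start in this report hit.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy agent the dial succeeds because the socket_vmnet daemon (typically kept alive via launchd, with sufficient privileges to bind the socket) is already listening before any test runs.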

TestCertOptions (10.25s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-702000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-702000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.902785833s)

-- stdout --
	* [cert-options-702000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-702000" primary control-plane node in "cert-options-702000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-702000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-702000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-702000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-702000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-702000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.010291ms)

-- stdout --
	* The control-plane node cert-options-702000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-702000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-702000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-702000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-702000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-702000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.674125ms)

-- stdout --
	* The control-plane node cert-options-702000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-702000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-702000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-702000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-702000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-14 07:49:44.837741 -0700 PDT m=+4307.154396001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-702000 -n cert-options-702000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-702000 -n cert-options-702000: exit status 7 (34.122625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-702000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-702000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-702000
--- FAIL: TestCertOptions (10.25s)
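Because the VM never booted, cert_options_test.go fell through to its SAN assertions with no certificate to inspect; the four "does not include ... in SAN" lines are a consequence of the driver failure, not a separate certificate bug. For reference, the check it would have run amounts to parsing the apiserver certificate and scanning its Subject Alternative Names. A sketch under the assumption of a local copy of the file (the name apiserver.crt here is hypothetical; the test reads /var/lib/minikube/certs/apiserver.crt inside the VM):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the in-VM certificate.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// The test expects 127.0.0.1 and 192.168.15.15 among the IP SANs,
	// and localhost and www.google.com among the DNS SANs.
	for _, ip := range cert.IPAddresses {
		fmt.Println("SAN IP:", ip)
	}
	for _, name := range cert.DNSNames {
		fmt.Println("SAN DNS:", name)
	}
}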

TestCertExpiration (195.69s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-773000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-773000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.284780709s)

-- stdout --
	* [cert-expiration-773000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-773000" primary control-plane node in "cert-expiration-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-773000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-773000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-773000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.247406375s)

-- stdout --
	* [cert-expiration-773000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-773000" primary control-plane node in "cert-expiration-773000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-773000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-773000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-773000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-773000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-773000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-773000" primary control-plane node in "cert-expiration-773000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-773000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-773000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-773000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-14 07:52:37.345092 -0700 PDT m=+4479.664046793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-773000 -n cert-expiration-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-773000 -n cert-expiration-773000: exit status 7 (73.937125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-773000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-773000
--- FAIL: TestCertExpiration (195.69s)
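The intent of TestCertExpiration is that the second start, issued after the 3-minute certificates have lapsed, should warn about expired certs; here both starts failed at the driver layer, so the warning never had a chance to appear. The expiry condition itself is just a NotAfter comparison. A sketch, again assuming a hypothetical local copy of the certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// With --cert-expiration=3m, NotAfter lands three minutes after
	// creation; this branch is what the expected warning hinges on.
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate expired at", cert.NotAfter)
	} else {
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}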

TestDockerFlags (12.31s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-838000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-838000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.059021209s)

-- stdout --
	* [docker-flags-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-838000" primary control-plane node in "docker-flags-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:49:22.427171    4607 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:49:22.427335    4607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:22.427338    4607 out.go:358] Setting ErrFile to fd 2...
	I1014 07:49:22.427344    4607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:22.427459    4607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:49:22.428627    4607 out.go:352] Setting JSON to false
	I1014 07:49:22.446701    4607 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4732,"bootTime":1728912630,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:49:22.446775    4607 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:49:22.452764    4607 out.go:177] * [docker-flags-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:49:22.461604    4607 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:49:22.461645    4607 notify.go:220] Checking for updates...
	I1014 07:49:22.469625    4607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:49:22.473675    4607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:49:22.475123    4607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:49:22.478590    4607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:49:22.481650    4607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:49:22.484957    4607 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:49:22.485030    4607 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:49:22.485077    4607 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:49:22.489627    4607 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:49:22.496643    4607 start.go:297] selected driver: qemu2
	I1014 07:49:22.496649    4607 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:49:22.496655    4607 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:49:22.498985    4607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:49:22.502575    4607 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:49:22.505727    4607 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1014 07:49:22.505743    4607 cni.go:84] Creating CNI manager for ""
	I1014 07:49:22.505766    4607 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:49:22.505769    4607 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:49:22.505812    4607 start.go:340] cluster config:
	{Name:docker-flags-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:49:22.509906    4607 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:49:22.518626    4607 out.go:177] * Starting "docker-flags-838000" primary control-plane node in "docker-flags-838000" cluster
	I1014 07:49:22.522628    4607 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:49:22.522643    4607 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:49:22.522649    4607 cache.go:56] Caching tarball of preloaded images
	I1014 07:49:22.522717    4607 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:49:22.522722    4607 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:49:22.522775    4607 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/docker-flags-838000/config.json ...
	I1014 07:49:22.522785    4607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/docker-flags-838000/config.json: {Name:mkc5404a2a08c9b4587ba2cd251721b5e114954f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:49:22.523074    4607 start.go:360] acquireMachinesLock for docker-flags-838000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:24.639286    4607 start.go:364] duration metric: took 2.116182459s to acquireMachinesLock for "docker-flags-838000"
	I1014 07:49:24.639474    4607 start.go:93] Provisioning new machine with config: &{Name:docker-flags-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:24.639767    4607 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:24.650433    4607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:49:24.698784    4607 start.go:159] libmachine.API.Create for "docker-flags-838000" (driver="qemu2")
	I1014 07:49:24.698839    4607 client.go:168] LocalClient.Create starting
	I1014 07:49:24.698979    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:24.699048    4607 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:24.699067    4607 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:24.699156    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:24.699213    4607 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:24.699229    4607 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:24.699965    4607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:24.866557    4607 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:24.900115    4607 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:24.900120    4607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:24.900363    4607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2
	I1014 07:49:24.910190    4607 main.go:141] libmachine: STDOUT: 
	I1014 07:49:24.910209    4607 main.go:141] libmachine: STDERR: 
	I1014 07:49:24.910267    4607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2 +20000M
	I1014 07:49:24.918622    4607 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:24.918639    4607 main.go:141] libmachine: STDERR: 
	I1014 07:49:24.918662    4607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2
	I1014 07:49:24.918669    4607 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:24.918684    4607 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:24.918715    4607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:f8:30:82:1d:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2
	I1014 07:49:24.920472    4607 main.go:141] libmachine: STDOUT: 
	I1014 07:49:24.920487    4607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:24.920508    4607 client.go:171] duration metric: took 221.663208ms to LocalClient.Create
	I1014 07:49:26.922684    4607 start.go:128] duration metric: took 2.282900083s to createHost
	I1014 07:49:26.922745    4607 start.go:83] releasing machines lock for "docker-flags-838000", held for 2.283418916s
	W1014 07:49:26.922804    4607 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:26.940032    4607 out.go:177] * Deleting "docker-flags-838000" in qemu2 ...
	W1014 07:49:26.972266    4607 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:26.972313    4607 start.go:729] Will try again in 5 seconds ...
	I1014 07:49:31.974476    4607 start.go:360] acquireMachinesLock for docker-flags-838000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:32.057561    4607 start.go:364] duration metric: took 82.991459ms to acquireMachinesLock for "docker-flags-838000"
	I1014 07:49:32.057700    4607 start.go:93] Provisioning new machine with config: &{Name:docker-flags-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:32.057901    4607 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:32.069274    4607 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:49:32.116780    4607 start.go:159] libmachine.API.Create for "docker-flags-838000" (driver="qemu2")
	I1014 07:49:32.116847    4607 client.go:168] LocalClient.Create starting
	I1014 07:49:32.116972    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:32.117028    4607 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:32.117046    4607 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:32.117128    4607 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:32.117167    4607 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:32.117183    4607 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:32.117787    4607 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:32.283361    4607 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:32.388905    4607 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:32.388911    4607 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:32.389146    4607 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2
	I1014 07:49:32.399089    4607 main.go:141] libmachine: STDOUT: 
	I1014 07:49:32.399111    4607 main.go:141] libmachine: STDERR: 
	I1014 07:49:32.399167    4607 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2 +20000M
	I1014 07:49:32.407568    4607 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:32.407581    4607 main.go:141] libmachine: STDERR: 
	I1014 07:49:32.407592    4607 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2
	I1014 07:49:32.407597    4607 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:32.407607    4607 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:32.407637    4607 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9e:5f:6d:7e:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/docker-flags-838000/disk.qcow2
	I1014 07:49:32.409422    4607 main.go:141] libmachine: STDOUT: 
	I1014 07:49:32.409439    4607 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:32.409454    4607 client.go:171] duration metric: took 292.605542ms to LocalClient.Create
	I1014 07:49:34.411745    4607 start.go:128] duration metric: took 2.353833875s to createHost
	I1014 07:49:34.411814    4607 start.go:83] releasing machines lock for "docker-flags-838000", held for 2.354252375s
	W1014 07:49:34.412122    4607 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:34.424476    4607 out.go:201] 
	W1014 07:49:34.428699    4607 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:49:34.428732    4607 out.go:270] * 
	* 
	W1014 07:49:34.431498    4607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:49:34.440574    4607 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-838000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (86.074333ms)

-- stdout --
	* The control-plane node docker-flags-838000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-838000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-838000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-838000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-838000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-838000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-838000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-838000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.659791ms)

-- stdout --
	* The control-plane node docker-flags-838000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-838000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-838000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-838000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-838000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-838000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-14 07:49:34.589536 -0700 PDT m=+4296.906054585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-838000 -n docker-flags-838000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-838000 -n docker-flags-838000: exit status 7 (33.932541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-838000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-838000
--- FAIL: TestDockerFlags (12.31s)
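TestDockerFlags only reaches its real assertions on a running node: "systemctl show docker --property=Environment --no-pager" prints a single Environment= line, and the test checks that it contains each --docker-env pair (and, via ExecStart, each --docker-opt). A sketch of that substring check over assumed healthy output; the out value below is what a working node would be expected to return, not anything captured in this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed output of:
	//   minikube -p docker-flags-838000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	// on a node that actually started; the real run never got this far.
	out := "Environment=FOO=BAR BAZ=BAT"
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if strings.Contains(out, want) {
			fmt.Printf("found %q\n", want)
		} else {
			fmt.Printf("missing %q\n", want)
		}
	}
}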

TestForceSystemdFlag (10.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-067000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-067000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.803150833s)

-- stdout --
	* [force-systemd-flag-067000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-067000" primary control-plane node in "force-systemd-flag-067000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-067000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:48:57.142141    4488 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:48:57.142304    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:48:57.142307    4488 out.go:358] Setting ErrFile to fd 2...
	I1014 07:48:57.142309    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:48:57.142462    4488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:48:57.143652    4488 out.go:352] Setting JSON to false
	I1014 07:48:57.161523    4488 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4707,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:48:57.161589    4488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:48:57.165711    4488 out.go:177] * [force-systemd-flag-067000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:48:57.170710    4488 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:48:57.170754    4488 notify.go:220] Checking for updates...
	I1014 07:48:57.177625    4488 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:48:57.180699    4488 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:48:57.183696    4488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:48:57.184917    4488 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:48:57.187652    4488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:48:57.191023    4488 config.go:182] Loaded profile config "NoKubernetes-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1014 07:48:57.191106    4488 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:48:57.191161    4488 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:48:57.195546    4488 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:48:57.202675    4488 start.go:297] selected driver: qemu2
	I1014 07:48:57.202682    4488 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:48:57.202688    4488 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:48:57.205207    4488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:48:57.208736    4488 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:48:57.211750    4488 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 07:48:57.211769    4488 cni.go:84] Creating CNI manager for ""
	I1014 07:48:57.211799    4488 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:48:57.211806    4488 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:48:57.211848    4488 start.go:340] cluster config:
	{Name:force-systemd-flag-067000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:48:57.216610    4488 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:48:57.224667    4488 out.go:177] * Starting "force-systemd-flag-067000" primary control-plane node in "force-systemd-flag-067000" cluster
	I1014 07:48:57.228679    4488 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:48:57.228700    4488 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:48:57.228710    4488 cache.go:56] Caching tarball of preloaded images
	I1014 07:48:57.228798    4488 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:48:57.228804    4488 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:48:57.228881    4488 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/force-systemd-flag-067000/config.json ...
	I1014 07:48:57.228895    4488 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/force-systemd-flag-067000/config.json: {Name:mk65fec7e6130742969bc318350073b3681528cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:48:57.229301    4488 start.go:360] acquireMachinesLock for force-systemd-flag-067000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:48:57.229358    4488 start.go:364] duration metric: took 46.208µs to acquireMachinesLock for "force-systemd-flag-067000"
	I1014 07:48:57.229372    4488 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:48:57.229397    4488 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:48:57.233686    4488 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:48:57.250941    4488 start.go:159] libmachine.API.Create for "force-systemd-flag-067000" (driver="qemu2")
	I1014 07:48:57.250972    4488 client.go:168] LocalClient.Create starting
	I1014 07:48:57.251038    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:48:57.251076    4488 main.go:141] libmachine: Decoding PEM data...
	I1014 07:48:57.251089    4488 main.go:141] libmachine: Parsing certificate...
	I1014 07:48:57.251132    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:48:57.251163    4488 main.go:141] libmachine: Decoding PEM data...
	I1014 07:48:57.251177    4488 main.go:141] libmachine: Parsing certificate...
	I1014 07:48:57.251599    4488 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:48:57.413189    4488 main.go:141] libmachine: Creating SSH key...
	I1014 07:48:57.494021    4488 main.go:141] libmachine: Creating Disk image...
	I1014 07:48:57.494026    4488 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:48:57.494251    4488 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2
	I1014 07:48:57.504056    4488 main.go:141] libmachine: STDOUT: 
	I1014 07:48:57.504080    4488 main.go:141] libmachine: STDERR: 
	I1014 07:48:57.504139    4488 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2 +20000M
	I1014 07:48:57.512781    4488 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:48:57.512796    4488 main.go:141] libmachine: STDERR: 
	I1014 07:48:57.512815    4488 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2
	I1014 07:48:57.512821    4488 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:48:57.512834    4488 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:48:57.512864    4488 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:92:6c:06:0b:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2
	I1014 07:48:57.514615    4488 main.go:141] libmachine: STDOUT: 
	I1014 07:48:57.514629    4488 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:48:57.514649    4488 client.go:171] duration metric: took 263.675917ms to LocalClient.Create
	I1014 07:48:59.516796    4488 start.go:128] duration metric: took 2.287408667s to createHost
	I1014 07:48:59.516873    4488 start.go:83] releasing machines lock for "force-systemd-flag-067000", held for 2.287534791s
	W1014 07:48:59.516958    4488 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:48:59.527941    4488 out.go:177] * Deleting "force-systemd-flag-067000" in qemu2 ...
	W1014 07:48:59.557418    4488 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:48:59.557447    4488 start.go:729] Will try again in 5 seconds ...
	I1014 07:49:04.558938    4488 start.go:360] acquireMachinesLock for force-systemd-flag-067000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:04.559543    4488 start.go:364] duration metric: took 463.333µs to acquireMachinesLock for "force-systemd-flag-067000"
	I1014 07:49:04.559626    4488 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-067000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:04.559849    4488 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:04.569488    4488 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:49:04.609122    4488 start.go:159] libmachine.API.Create for "force-systemd-flag-067000" (driver="qemu2")
	I1014 07:49:04.609183    4488 client.go:168] LocalClient.Create starting
	I1014 07:49:04.609371    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:04.609476    4488 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:04.609500    4488 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:04.609615    4488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:04.609683    4488 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:04.609701    4488 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:04.610406    4488 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:04.781091    4488 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:04.842091    4488 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:04.842096    4488 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:04.842317    4488 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2
	I1014 07:49:04.852337    4488 main.go:141] libmachine: STDOUT: 
	I1014 07:49:04.852351    4488 main.go:141] libmachine: STDERR: 
	I1014 07:49:04.852410    4488 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2 +20000M
	I1014 07:49:04.860795    4488 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:04.860810    4488 main.go:141] libmachine: STDERR: 
	I1014 07:49:04.860820    4488 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2
	I1014 07:49:04.860833    4488 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:04.860842    4488 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:04.860868    4488 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:9c:e4:e9:99:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-flag-067000/disk.qcow2
	I1014 07:49:04.862626    4488 main.go:141] libmachine: STDOUT: 
	I1014 07:49:04.862640    4488 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:04.862657    4488 client.go:171] duration metric: took 253.47275ms to LocalClient.Create
	I1014 07:49:06.864807    4488 start.go:128] duration metric: took 2.304959375s to createHost
	I1014 07:49:06.864880    4488 start.go:83] releasing machines lock for "force-systemd-flag-067000", held for 2.305340084s
	W1014 07:49:06.865174    4488 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:06.877791    4488 out.go:201] 
	W1014 07:49:06.881815    4488 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:49:06.881838    4488 out.go:270] * 
	* 
	W1014 07:49:06.884693    4488 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:49:06.893703    4488 out.go:201] 

** /stderr **
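The failing step in the capture above is the socket_vmnet_client wrapper: it first connects to the daemon's unix socket, then execs qemu-system-aarch64 with that connection handed over as file descriptor 3, which is what the logged -netdev socket,id=net0,fd=3 refers to. When the daemon is down, the connect fails immediately and qemu never starts. A stripped-down sketch of the same invocation (paths shortened and flags reduced to the relevant ones; not a verbatim re-run of the logged command):

$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
    qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
    -m 2048 -smp 2 -display none \
    -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
    -drive file=disk.qcow2,format=qcow2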
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-067000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-067000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-067000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.790334ms)

-- stdout --
	* The control-plane node force-systemd-flag-067000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-067000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-067000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-14 07:49:06.995543 -0700 PDT m=+4269.311693585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-067000 -n force-systemd-flag-067000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-067000 -n force-systemd-flag-067000: exit status 7 (36.61625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-067000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-067000
--- FAIL: TestForceSystemdFlag (10.01s)
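As with TestDockerFlags, the assertion this test exists for (docker must report the systemd cgroup driver when --force-systemd is set) is never reached; exit status 83 above only reflects ssh-ing into a profile whose host is Stopped. On a host where the VM does come up, the check reduces to the command below, with the output expected on success (profile name as used in this run):

$ out/minikube-darwin-arm64 -p force-systemd-flag-067000 ssh "docker info --format {{.CgroupDriver}}"
systemd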

TestForceSystemdEnv (10.26s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-455000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
W1014 07:49:12.506584    1497 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1014 07:49:12.506847    1497 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1014 07:49:12.506917    1497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit
I1014 07:49:13.059023    1497 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400] Decompressors:map[bz2:0x14000880658 gz:0x140008806e0 tar:0x14000880690 tar.bz2:0x140008806a0 tar.gz:0x140008806b0 tar.xz:0x140008806c0 tar.zst:0x140008806d0 tbz2:0x140008806a0 tgz:0x140008806b0 txz:0x140008806c0 tzst:0x140008806d0 xz:0x140008806e8 zip:0x140008806f0 zst:0x14000880750] Getters:map[file:0x14001bed4c0 http:0x140008fe5f0 https:0x140008fe640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1014 07:49:13.059157    1497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit
I1014 07:49:16.367126    1497 install.go:79] stdout: 
W1014 07:49:16.367546    1497 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit 

I1014 07:49:16.367589    1497 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit]
I1014 07:49:16.385348    1497 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit]
I1014 07:49:16.398109    1497 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit]
I1014 07:49:16.409634    1497 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/001/docker-machine-driver-hyperkit]
I1014 07:49:16.430361    1497 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1014 07:49:16.430507    1497 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1014 07:49:18.228213    1497 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1014 07:49:18.228233    1497 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1014 07:49:18.228283    1497 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1014 07:49:18.228318    1497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit
I1014 07:49:18.630520    1497 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400 0x1078a2400] Decompressors:map[bz2:0x14000880658 gz:0x140008806e0 tar:0x14000880690 tar.bz2:0x140008806a0 tar.gz:0x140008806b0 tar.xz:0x140008806c0 tar.zst:0x140008806d0 tbz2:0x140008806a0 tgz:0x140008806b0 txz:0x140008806c0 tzst:0x140008806d0 xz:0x140008806e8 zip:0x140008806f0 zst:0x14000880750] Getters:map[file:0x14001c67b00 http:0x1400006e820 https:0x1400006e870] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1014 07:49:18.630650    1497 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit
I1014 07:49:21.758606    1497 install.go:79] stdout: 
W1014 07:49:21.758814    1497 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit 

I1014 07:49:21.758846    1497 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit]
I1014 07:49:21.775656    1497 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit]
I1014 07:49:21.788897    1497 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit]
I1014 07:49:21.799701    1497 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1175637221/002/docker-machine-driver-hyperkit]
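The interleaved TestHyperKitDriverInstallOrUpdate output above shows the installer's download fallback working as designed: the arm64-specific v1.3.0 driver asset has no published checksum (the .sha256 fetch returns 404), so the download retries the un-suffixed common asset, which succeeds and is then installed via the chown/chmod steps logged. The two URLs can be probed directly, assuming curl is available on the host (a verification sketch, not part of the test):

$ # arch-specific checksum the installer tries first; 404s per the log
$ curl -fsSLI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256
$ # common-version fallback probed next; this one resolves in this run
$ curl -fsSLI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256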
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-455000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.025652208s)

-- stdout --
	* [force-systemd-env-455000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-455000" primary control-plane node in "force-systemd-env-455000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-455000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:49:12.167888    4555 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:49:12.168039    4555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:12.168042    4555 out.go:358] Setting ErrFile to fd 2...
	I1014 07:49:12.168045    4555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:12.168172    4555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:49:12.169359    4555 out.go:352] Setting JSON to false
	I1014 07:49:12.188137    4555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4722,"bootTime":1728912630,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:49:12.188218    4555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:49:12.193257    4555 out.go:177] * [force-systemd-env-455000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:49:12.201467    4555 notify.go:220] Checking for updates...
	I1014 07:49:12.205346    4555 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:49:12.213193    4555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:49:12.221130    4555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:49:12.229268    4555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:49:12.236223    4555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:49:12.243135    4555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1014 07:49:12.247588    4555 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:49:12.247636    4555 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:49:12.251216    4555 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:49:12.258268    4555 start.go:297] selected driver: qemu2
	I1014 07:49:12.258274    4555 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:49:12.258280    4555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:49:12.261267    4555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:49:12.265274    4555 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:49:12.269378    4555 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 07:49:12.269395    4555 cni.go:84] Creating CNI manager for ""
	I1014 07:49:12.269428    4555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:49:12.269434    4555 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:49:12.269463    4555 start.go:340] cluster config:
	{Name:force-systemd-env-455000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:49:12.274593    4555 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:49:12.283180    4555 out.go:177] * Starting "force-systemd-env-455000" primary control-plane node in "force-systemd-env-455000" cluster
	I1014 07:49:12.287296    4555 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:49:12.287317    4555 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:49:12.287330    4555 cache.go:56] Caching tarball of preloaded images
	I1014 07:49:12.287436    4555 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:49:12.287443    4555 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:49:12.287515    4555 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/force-systemd-env-455000/config.json ...
	I1014 07:49:12.287529    4555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/force-systemd-env-455000/config.json: {Name:mk458cb509fd0f1bc3aa2341025acad4c411e2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:49:12.287813    4555 start.go:360] acquireMachinesLock for force-systemd-env-455000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:12.287875    4555 start.go:364] duration metric: took 50.375µs to acquireMachinesLock for "force-systemd-env-455000"
	I1014 07:49:12.287889    4555 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:12.287924    4555 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:12.291326    4555 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:49:12.310804    4555 start.go:159] libmachine.API.Create for "force-systemd-env-455000" (driver="qemu2")
	I1014 07:49:12.310833    4555 client.go:168] LocalClient.Create starting
	I1014 07:49:12.310926    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:12.310969    4555 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:12.310980    4555 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:12.311016    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:12.311050    4555 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:12.311061    4555 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:12.311478    4555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:12.573794    4555 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:12.671981    4555 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:12.671995    4555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:12.672219    4555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2
	I1014 07:49:12.681925    4555 main.go:141] libmachine: STDOUT: 
	I1014 07:49:12.681948    4555 main.go:141] libmachine: STDERR: 
	I1014 07:49:12.682005    4555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2 +20000M
	I1014 07:49:12.690481    4555 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:12.690503    4555 main.go:141] libmachine: STDERR: 
	I1014 07:49:12.690520    4555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2
	I1014 07:49:12.690525    4555 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:12.690541    4555 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:12.690568    4555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:a1:59:6a:01:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2
	I1014 07:49:12.692443    4555 main.go:141] libmachine: STDOUT: 
	I1014 07:49:12.692458    4555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:12.692479    4555 client.go:171] duration metric: took 381.645916ms to LocalClient.Create
	I1014 07:49:14.694635    4555 start.go:128] duration metric: took 2.406721667s to createHost
	I1014 07:49:14.694700    4555 start.go:83] releasing machines lock for "force-systemd-env-455000", held for 2.406847666s
	W1014 07:49:14.694751    4555 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:14.709961    4555 out.go:177] * Deleting "force-systemd-env-455000" in qemu2 ...
	W1014 07:49:14.734311    4555 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:14.734344    4555 start.go:729] Will try again in 5 seconds ...
	I1014 07:49:19.736546    4555 start.go:360] acquireMachinesLock for force-systemd-env-455000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:19.737055    4555 start.go:364] duration metric: took 365.125µs to acquireMachinesLock for "force-systemd-env-455000"
	I1014 07:49:19.737174    4555 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:19.737458    4555 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:19.753797    4555 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1014 07:49:19.802883    4555 start.go:159] libmachine.API.Create for "force-systemd-env-455000" (driver="qemu2")
	I1014 07:49:19.802933    4555 client.go:168] LocalClient.Create starting
	I1014 07:49:19.803068    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:19.803165    4555 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:19.803184    4555 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:19.803247    4555 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:19.803308    4555 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:19.803323    4555 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:19.803872    4555 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:19.975926    4555 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:20.095318    4555 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:20.095327    4555 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:20.095574    4555 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2
	I1014 07:49:20.105840    4555 main.go:141] libmachine: STDOUT: 
	I1014 07:49:20.105871    4555 main.go:141] libmachine: STDERR: 
	I1014 07:49:20.105932    4555 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2 +20000M
	I1014 07:49:20.114371    4555 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:20.114391    4555 main.go:141] libmachine: STDERR: 
	I1014 07:49:20.114403    4555 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2
	I1014 07:49:20.114408    4555 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:20.114419    4555 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:20.114453    4555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:f5:69:3a:e5:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/force-systemd-env-455000/disk.qcow2
	I1014 07:49:20.116338    4555 main.go:141] libmachine: STDOUT: 
	I1014 07:49:20.116354    4555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:20.116368    4555 client.go:171] duration metric: took 313.433291ms to LocalClient.Create
	I1014 07:49:22.118626    4555 start.go:128] duration metric: took 2.381163417s to createHost
	I1014 07:49:22.118703    4555 start.go:83] releasing machines lock for "force-systemd-env-455000", held for 2.381652834s
	W1014 07:49:22.119115    4555 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-455000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-455000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:22.132616    4555 out.go:201] 
	W1014 07:49:22.136752    4555 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:49:22.136804    4555 out.go:270] * 
	* 
	W1014 07:49:22.138978    4555 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:49:22.148616    4555 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-455000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-455000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-455000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.500042ms)

-- stdout --
	* The control-plane node force-systemd-env-455000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-455000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-455000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-14 07:49:22.239949 -0700 PDT m=+4284.556302585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-455000 -n force-systemd-env-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-455000 -n force-systemd-env-455000: exit status 7 (40.880666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-455000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-455000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-455000
--- FAIL: TestForceSystemdEnv (10.26s)
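Every qemu2 start failure in this report reduces to the same root cause visible in the stderr above: nothing is accepting connections on /var/run/socket_vmnet. A minimal standalone probe (a hypothetical sketch, not part of the test suite; the socket path is the default SocketVMnetPath from the cluster config above) that reproduces the check the driver's socket_vmnet_client invocation effectively performs:

	// socketcheck.go - hypothetical probe, not part of minikube.
	// Dials the unix socket that the qemu2 driver hands to QEMU via
	// "socket_vmnet_client /var/run/socket_vmnet ...". A refused
	// connection here matches the 'Failed to connect to
	// "/var/run/socket_vmnet": Connection refused' errors above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // default SocketVMnetPath
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1) // same failure mode as the VM start errors above
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the build agent, every qemu2 VM creation will fail at the same point regardless of which test triggers it.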

TestFunctional/parallel/ServiceCmdConnect (34.87s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-365000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-365000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-8rncf" [79c40ae6-b683-4938-b172-4a358883eaec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-8rncf" [79c40ae6-b683-4938-b172-4a358883eaec] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.010079667s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31350
functional_test.go:1661: error fetching http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
I1014 06:49:07.668435    1497 retry.go:31] will retry after 1.386733213s: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
I1014 06:49:09.059020    1497 retry.go:31] will retry after 2.157048658s: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
I1014 06:49:11.219844    1497 retry.go:31] will retry after 2.076135944s: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
I1014 06:49:13.298901    1497 retry.go:31] will retry after 3.458445611s: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
I1014 06:49:16.760241    1497 retry.go:31] will retry after 5.434161869s: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
I1014 06:49:22.198894    1497 retry.go:31] will retry after 9.903137286s: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31350: Get "http://192.168.105.4:31350": dial tcp 192.168.105.4:31350: connect: connection refused
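The retry.go lines above show the test polling the NodePort with growing delays before giving up. A compact illustrative sketch of that retry-with-backoff pattern (the attempt count and doubling schedule are assumptions; the real retry.go uses randomized growth):

	// retryfetch.go - illustrative only; models the polling loop above.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func fetchWithRetry(url string, attempts int) error {
		delay := time.Second
		var err error
		for i := 0; i < attempts; i++ {
			var resp *http.Response
			resp, err = http.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // endpoint answered; the test would read the body next
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // stand-in for retry.go's jittered growth
		}
		return err
	}

	func main() {
		if err := fetchWithRetry("http://192.168.105.4:31350", 6); err != nil {
			fmt.Println("failed to fetch:", err)
		}
	}

Here every attempt fails with "connection refused" because, as the service description further below shows, the service has no ready endpoints behind NodePort 31350.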
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-365000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-8rncf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-365000/192.168.105.4
Start Time:       Mon, 14 Oct 2024 06:48:58 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://7f20511d7350c9dbea1d0a4381e916faf22b13ca5834083a97e8ed6b3c577f32
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 14 Oct 2024 06:49:18 -0700
      Finished:     Mon, 14 Oct 2024 06:49:18 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tksm2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-tksm2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-8rncf to functional-365000
  Normal   Pulling    34s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     30s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.22s (3.22s including waiting). Image size: 84957542 bytes.
  Normal   Created    14s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 30s)  kubelet            Started container echoserver-arm
  Normal   Pulled     14s (x2 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    13s (x3 over 29s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-8rncf_default(79c40ae6-b683-4938-b172-4a358883eaec)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-365000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
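The "exec format error" above is the classic symptom of a container entrypoint built for a different CPU architecture than the arm64 node. A quick hypothetical check (not part of the test suite; it shells out to the standard docker CLI inspect format) that prints the architecture the pulled image declares:

	// archcheck.go - hypothetical helper: prints the architecture an
	// image was built for, to compare against the node's arm64 kubelet.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		image := "registry.k8s.io/echoserver-arm:1.8"
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Architecture}}", image).Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// An "exec format error" on an arm64 node means this does not print arm64.
		fmt.Printf("%s is built for %s\n", image, strings.TrimSpace(string(out)))
	}

A mismatched architecture also explains the empty Endpoints list in the service description below: the pod never becomes Ready, so nothing backs NodePort 31350 and the fetches above are refused.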
functional_test.go:1614: (dbg) Run:  kubectl --context functional-365000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.203.220
IPs:                      10.108.203.220
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31350/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-365000 -n functional-365000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh -- ls                                                                                          | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh cat                                                                                            | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | /mount-9p/test-1728913760928750000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh stat                                                                                           | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh stat                                                                                           | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh sudo                                                                                           | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-365000                                                                                                 | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4184654023/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh -- ls                                                                                          | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh sudo                                                                                           | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-365000                                                                                                 | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-365000                                                                                                 | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-365000                                                                                                 | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-365000 ssh findmnt                                                                                        | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT | 14 Oct 24 06:49 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-365000                                                                                                 | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-365000                                                                                                 | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-365000                                                                                                 | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-365000 --dry-run                                                                                       | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-365000 | jenkins | v1.34.0 | 14 Oct 24 06:49 PDT |                     |
	|           | -p functional-365000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 06:49:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 06:49:29.838001    2083 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:49:29.838187    2083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:29.838190    2083 out.go:358] Setting ErrFile to fd 2...
	I1014 06:49:29.838192    2083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:29.838352    2083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 06:49:29.839774    2083 out.go:352] Setting JSON to false
	I1014 06:49:29.859260    2083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1139,"bootTime":1728912630,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:49:29.859335    2083 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:49:29.863318    2083 out.go:177] * [functional-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 06:49:29.871240    2083 notify.go:220] Checking for updates...
	I1014 06:49:29.877287    2083 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 06:49:29.880301    2083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:49:29.883330    2083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:49:29.886314    2083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:49:29.887670    2083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 06:49:29.890316    2083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 06:49:29.893672    2083 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 06:49:29.893924    2083 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:49:29.898261    2083 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 06:49:29.905318    2083 start.go:297] selected driver: qemu2
	I1014 06:49:29.905326    2083 start.go:901] validating driver "qemu2" against &{Name:functional-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:49:29.905385    2083 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 06:49:29.907923    2083 cni.go:84] Creating CNI manager for ""
	I1014 06:49:29.907951    2083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 06:49:29.907998    2083 start.go:340] cluster config:
	{Name:functional-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:49:29.920291    2083 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Oct 14 13:49:24 functional-365000 dockerd[5748]: time="2024-10-14T13:49:24.475343720Z" level=warning msg="cleaning up after shim disconnected" id=6780efec08f0f3d9ce9bb78ac15d5ea6dda183a1f0125851519998ab0fe30c27 namespace=moby
	Oct 14 13:49:24 functional-365000 dockerd[5748]: time="2024-10-14T13:49:24.475348138Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 14 13:49:25 functional-365000 dockerd[5740]: time="2024-10-14T13:49:25.863289524Z" level=info msg="ignoring event" container=24f9b71a8d2ab746af59149e46ed9b29ee3c4c71c928336d72c062120762e645 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 14 13:49:25 functional-365000 dockerd[5748]: time="2024-10-14T13:49:25.863544078Z" level=info msg="shim disconnected" id=24f9b71a8d2ab746af59149e46ed9b29ee3c4c71c928336d72c062120762e645 namespace=moby
	Oct 14 13:49:25 functional-365000 dockerd[5748]: time="2024-10-14T13:49:25.863571797Z" level=warning msg="cleaning up after shim disconnected" id=24f9b71a8d2ab746af59149e46ed9b29ee3c4c71c928336d72c062120762e645 namespace=moby
	Oct 14 13:49:25 functional-365000 dockerd[5748]: time="2024-10-14T13:49:25.863577549Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 14 13:49:29 functional-365000 dockerd[5748]: time="2024-10-14T13:49:29.885900027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 13:49:29 functional-365000 dockerd[5748]: time="2024-10-14T13:49:29.886239571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 13:49:29 functional-365000 dockerd[5748]: time="2024-10-14T13:49:29.886253660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 13:49:29 functional-365000 dockerd[5748]: time="2024-10-14T13:49:29.886290048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 13:49:29 functional-365000 dockerd[5748]: time="2024-10-14T13:49:29.914236088Z" level=info msg="shim disconnected" id=d7fd10ff86873ec89863ef2665d9b37c81fe167488a72eae6b4527befbf26759 namespace=moby
	Oct 14 13:49:29 functional-365000 dockerd[5748]: time="2024-10-14T13:49:29.914270851Z" level=warning msg="cleaning up after shim disconnected" id=d7fd10ff86873ec89863ef2665d9b37c81fe167488a72eae6b4527befbf26759 namespace=moby
	Oct 14 13:49:29 functional-365000 dockerd[5748]: time="2024-10-14T13:49:29.914399733Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 14 13:49:29 functional-365000 dockerd[5740]: time="2024-10-14T13:49:29.914563294Z" level=info msg="ignoring event" container=d7fd10ff86873ec89863ef2665d9b37c81fe167488a72eae6b4527befbf26759 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.831404705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.831474482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.831489237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.831552761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.862860026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.862978613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.863005498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 13:49:30 functional-365000 dockerd[5748]: time="2024-10-14T13:49:30.863053807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 13:49:30 functional-365000 cri-dockerd[6001]: time="2024-10-14T13:49:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2244f7377e90c0c80d528c18dec8329b8da264bc7c999440659168df1359daf/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 13:49:30 functional-365000 cri-dockerd[6001]: time="2024-10-14T13:49:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a28fc156414937f3bee9e98b9a218653f2eb073c78c9dbffdd43de7c80c9ef8d/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 13:49:31 functional-365000 dockerd[5740]: time="2024-10-14T13:49:31.119435303Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=0df55f19b11bf50b traceID=e8067b563727d338accd896bacab69a2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d7fd10ff86873       72565bf5bbedf                                                                                         3 seconds ago        Exited              echoserver-arm            2                   1bb0a4d0eeadb       hello-node-64b4f8f9ff-89ltn
	6780efec08f0f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   24f9b71a8d2ab       busybox-mount
	7f20511d7350c       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   073ec9a8b062e       hello-node-connect-65d86f57f4-8rncf
	95ce03f80d2d6       nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0                         27 seconds ago       Running             myfrontend                0                   fa10bce0be94a       sp-pod
	d300549d30015       nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         42 seconds ago       Running             nginx                     0                   ce20bd0bf2323       nginx-svc
	40353bb2ff150       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   4172595bc4dd5       coredns-7c65d6cfc9-c9rqj
	f90a5acade0ad       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   c7e61fe42bd72       kube-proxy-t4fxq
	60d261f3af90e       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   ba6e120370f45       storage-provisioner
	112e08f368253       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   beaf065d9d96e       kube-scheduler-functional-365000
	4b87cf83cac0d       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   0872dfc5447b7       etcd-functional-365000
	b73d6a5e21582       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   52680489a1d4b       kube-controller-manager-functional-365000
	1e0a94623f62f       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   ee82f281e6f3d       kube-apiserver-functional-365000
	55472e218cb1e       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   84aefe19bb36b       storage-provisioner
	52a22024902c2       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   f37e94ad87544       coredns-7c65d6cfc9-c9rqj
	4dc7f907763b2       24a140c548c07                                                                                         About a minute ago   Exited              kube-proxy                1                   5c9cb0a88822a       kube-proxy-t4fxq
	c9e386dd5dc44       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   013a5a2d20f5c       kube-controller-manager-functional-365000
	d97a890d71c73       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   8d541d3be7459       kube-scheduler-functional-365000
	01a1d24ec7413       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   9b10267078355       etcd-functional-365000
	
	
	==> coredns [40353bb2ff15] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33151 - 28988 "HINFO IN 1275943966463264188.4477216427319263786. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031566461s
	[INFO] 10.244.0.1:37765 - 24322 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000093121s
	[INFO] 10.244.0.1:16217 - 45653 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000094121s
	[INFO] 10.244.0.1:10526 - 5919 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001077684s
	[INFO] 10.244.0.1:34124 - 12633 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000108419s
	[INFO] 10.244.0.1:46171 - 50704 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00010796s
	[INFO] 10.244.0.1:13743 - 64049 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000137847s
	
	
	==> coredns [52a22024902c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57591 - 28990 "HINFO IN 4153291182471920367.8333552505299061371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.400219476s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-365000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-365000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=functional-365000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T06_47_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:46:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-365000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:49:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:49:22 +0000   Mon, 14 Oct 2024 13:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:49:22 +0000   Mon, 14 Oct 2024 13:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:49:22 +0000   Mon, 14 Oct 2024 13:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:49:22 +0000   Mon, 14 Oct 2024 13:47:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-365000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 7adbae8376b544249c263c28daf0a155
	  System UUID:                7adbae8376b544249c263c28daf0a155
	  Boot ID:                    05942b85-ecbc-46d2-b50e-842162c5291f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-89ltn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     hello-node-connect-65d86f57f4-8rncf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 coredns-7c65d6cfc9-c9rqj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m27s
	  kube-system                 etcd-functional-365000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m34s
	  kube-system                 kube-apiserver-functional-365000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-365000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-t4fxq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-functional-365000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-sq7ks    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-9nwtg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  Starting                 2m27s                  kube-proxy       
	  Normal  Starting                 69s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node functional-365000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node functional-365000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m37s (x7 over 2m37s)  kubelet          Node functional-365000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m33s                  kubelet          Node functional-365000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s                  kubelet          Node functional-365000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s                  kubelet          Node functional-365000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m29s                  kubelet          Node functional-365000 status is now: NodeReady
	  Normal  RegisteredNode           2m28s                  node-controller  Node functional-365000 event: Registered Node functional-365000 in Controller
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)    kubelet          Node functional-365000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)    kubelet          Node functional-365000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)    kubelet          Node functional-365000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           117s                   node-controller  Node functional-365000 event: Registered Node functional-365000 in Controller
	  Normal  Starting                 74s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 74s)      kubelet          Node functional-365000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 74s)      kubelet          Node functional-365000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x7 over 74s)      kubelet          Node functional-365000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                    node-controller  Node functional-365000 event: Registered Node functional-365000 in Controller
	
	
	==> dmesg <==
	[ +15.639440] systemd-fstab-generator[4769]: Ignoring "noauto" option for root device
	[  +0.054259] kauditd_printk_skb: 36 callbacks suppressed
	[Oct14 13:48] systemd-fstab-generator[5264]: Ignoring "noauto" option for root device
	[  +0.051904] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.115823] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.096828] systemd-fstab-generator[5311]: Ignoring "noauto" option for root device
	[  +0.105307] systemd-fstab-generator[5325]: Ignoring "noauto" option for root device
	[  +5.101433] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.355879] systemd-fstab-generator[5954]: Ignoring "noauto" option for root device
	[  +0.084859] systemd-fstab-generator[5966]: Ignoring "noauto" option for root device
	[  +0.098657] systemd-fstab-generator[5978]: Ignoring "noauto" option for root device
	[  +0.102222] systemd-fstab-generator[5993]: Ignoring "noauto" option for root device
	[  +0.224608] systemd-fstab-generator[6158]: Ignoring "noauto" option for root device
	[  +1.278978] systemd-fstab-generator[6277]: Ignoring "noauto" option for root device
	[  +1.207095] kauditd_printk_skb: 189 callbacks suppressed
	[  +5.564022] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.737576] systemd-fstab-generator[7288]: Ignoring "noauto" option for root device
	[  +5.957618] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.174169] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.847286] kauditd_printk_skb: 11 callbacks suppressed
	[Oct14 13:49] kauditd_printk_skb: 25 callbacks suppressed
	[  +9.564981] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.229596] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.579808] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.445378] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [01a1d24ec741] <==
	{"level":"info","ts":"2024-10-14T13:47:31.975723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-14T13:47:31.975770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-14T13:47:31.975808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-14T13:47:31.976240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-14T13:47:31.976304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-14T13:47:31.976329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-14T13:47:31.978669Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T13:47:31.978675Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-365000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T13:47:31.979588Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T13:47:31.980052Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T13:47:31.980253Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T13:47:31.981257Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:47:31.981550Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:47:31.983805Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-14T13:47:31.983810Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T13:48:04.527351Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-14T13:48:04.527373Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-365000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-14T13:48:04.527405Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-14T13:48:04.527444Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-14T13:48:04.533998Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-14T13:48:04.534022Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-14T13:48:04.535209Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-14T13:48:04.537576Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-14T13:48:04.537609Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-14T13:48:04.537613Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-365000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [4b87cf83cac0] <==
	{"level":"info","ts":"2024-10-14T13:48:19.496280Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:48:19.497507Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:48:19.507823Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-14T13:48:19.507933Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T13:48:19.507960Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T13:48:19.508000Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-14T13:48:19.508028Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-14T13:48:21.268506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-14T13:48:21.268661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-14T13:48:21.268755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-14T13:48:21.268815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-14T13:48:21.268862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-14T13:48:21.268912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-14T13:48:21.268933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-14T13:48:21.271820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T13:48:21.272126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T13:48:21.271811Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-365000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T13:48:21.274307Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:48:21.276370Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-14T13:48:21.277233Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:48:21.278586Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T13:48:21.279462Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T13:48:21.279567Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-10-14T13:48:58.147541Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.177577ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:48:58.147584Z","caller":"traceutil/trace.go:171","msg":"trace[1617182258] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:713; }","duration":"218.231057ms","start":"2024-10-14T13:48:57.929346Z","end":"2024-10-14T13:48:58.147577Z","steps":["trace[1617182258] 'range keys from in-memory index tree'  (duration: 218.154984ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:49:32 up 2 min,  0 users,  load average: 1.43, 0.59, 0.23
	Linux functional-365000 5.10.207 #1 SMP PREEMPT Tue Oct 8 12:02:09 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1e0a94623f62] <==
	I1014 13:48:21.886699       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 13:48:21.886758       1 aggregator.go:171] initial CRD sync complete...
	I1014 13:48:21.886773       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 13:48:21.886780       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 13:48:21.886787       1 cache.go:39] Caches are synced for autoregister controller
	I1014 13:48:21.908343       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 13:48:22.785212       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1014 13:48:22.980796       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1014 13:48:22.981389       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 13:48:22.983012       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 13:48:23.375891       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 13:48:23.381021       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 13:48:23.397430       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 13:48:23.424154       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 13:48:23.426035       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 13:48:42.456776       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.70.64"}
	I1014 13:48:46.920005       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.185.131"}
	I1014 13:48:58.541404       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 13:48:58.583358       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.203.220"}
	E1014 13:49:03.966612       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49666: use of closed network connection
	E1014 13:49:13.552244       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49677: use of closed network connection
	I1014 13:49:13.634862       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.18.38"}
	I1014 13:49:30.415634       1 controller.go:615] quota admission added evaluator for: namespaces
	I1014 13:49:30.496325       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.9.59"}
	I1014 13:49:30.506734       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.184.150"}
	
	
	==> kube-controller-manager [b73d6a5e2158] <==
	I1014 13:49:13.604496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="19.757µs"
	I1014 13:49:14.558762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="30.887µs"
	I1014 13:49:15.605026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="48.811µs"
	I1014 13:49:19.691925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="38.974µs"
	I1014 13:49:22.659924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-365000"
	I1014 13:49:29.810285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="25.301µs"
	I1014 13:49:30.443248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.633246ms"
	E1014 13:49:30.443290       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1014 13:49:30.447898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.95832ms"
	E1014 13:49:30.447920       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1014 13:49:30.449812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.849161ms"
	E1014 13:49:30.449830       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1014 13:49:30.453752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.357761ms"
	E1014 13:49:30.453770       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1014 13:49:30.453796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.515571ms"
	E1014 13:49:30.453816       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1014 13:49:30.461243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.495481ms"
	I1014 13:49:30.473894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="12.623369ms"
	I1014 13:49:30.480669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.749286ms"
	I1014 13:49:30.480706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="22.091µs"
	I1014 13:49:30.483926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="8.238387ms"
	I1014 13:49:30.490777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.781882ms"
	I1014 13:49:30.490865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="38.347µs"
	I1014 13:49:30.493538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.173µs"
	I1014 13:49:30.816922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="31.011µs"
	
	
	==> kube-controller-manager [c9e386dd5dc4] <==
	I1014 13:47:35.867758       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 13:47:35.867779       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 13:47:35.867845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-365000"
	I1014 13:47:35.867876       1 shared_informer.go:320] Caches are synced for namespace
	I1014 13:47:35.867946       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 13:47:35.868095       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 13:47:35.868947       1 shared_informer.go:320] Caches are synced for job
	I1014 13:47:35.914963       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 13:47:35.930238       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 13:47:35.955579       1 shared_informer.go:320] Caches are synced for taint
	I1014 13:47:35.955676       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 13:47:35.955729       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-365000"
	I1014 13:47:35.955790       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 13:47:35.960363       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 13:47:36.012320       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 13:47:36.013132       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 13:47:36.063459       1 shared_informer.go:320] Caches are synced for disruption
	I1014 13:47:36.065638       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 13:47:36.072003       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 13:47:36.477306       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 13:47:36.529669       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 13:47:36.529726       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 13:47:36.577068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="21.924658ms"
	I1014 13:47:36.577935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="21.677µs"
	I1014 13:48:03.162116       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-365000"
	
	
	==> kube-proxy [4dc7f907763b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:47:34.029308       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:47:34.034007       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1014 13:47:34.034031       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:47:34.052635       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:47:34.052654       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:47:34.052669       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:47:34.054472       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:47:34.054555       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:47:34.054560       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:47:34.055415       1 config.go:199] "Starting service config controller"
	I1014 13:47:34.056852       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:47:34.056154       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:47:34.057056       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:47:34.056392       1 config.go:328] "Starting node config controller"
	I1014 13:47:34.057060       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:47:34.159081       1 shared_informer.go:320] Caches are synced for node config
	I1014 13:47:34.159118       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:47:34.159127       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f90a5acade0a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:48:23.312718       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:48:23.317792       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1014 13:48:23.317831       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:48:23.375324       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:48:23.375345       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:48:23.375359       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:48:23.376105       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:48:23.376297       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:48:23.376399       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:48:23.376886       1 config.go:199] "Starting service config controller"
	I1014 13:48:23.376895       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:48:23.376906       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:48:23.376907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:48:23.377104       1 config.go:328] "Starting node config controller"
	I1014 13:48:23.377109       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:48:23.477405       1 shared_informer.go:320] Caches are synced for node config
	I1014 13:48:23.477406       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:48:23.477418       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [112e08f36825] <==
	I1014 13:48:20.322142       1 serving.go:386] Generated self-signed cert in-memory
	W1014 13:48:21.791887       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 13:48:21.791924       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 13:48:21.791935       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 13:48:21.791943       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 13:48:21.831641       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 13:48:21.831685       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:48:21.832821       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 13:48:21.832870       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 13:48:21.832878       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 13:48:21.832884       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 13:48:21.934604       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d97a890d71c7] <==
	I1014 13:47:30.558743       1 serving.go:386] Generated self-signed cert in-memory
	W1014 13:47:32.504131       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 13:47:32.504214       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 13:47:32.504248       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 13:47:32.504274       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 13:47:32.544663       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 13:47:32.544681       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:47:32.546036       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 13:47:32.546302       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 13:47:32.546353       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 13:47:32.546379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 13:47:32.646855       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 13:48:04.522809       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 13:48:04.522847       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 13:48:04.522890       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1014 13:48:04.523484       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 14 13:49:18 functional-365000 kubelet[6284]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 13:49:18 functional-365000 kubelet[6284]: I1014 13:49:18.872233    6284 scope.go:117] "RemoveContainer" containerID="f1cce742c63ff946e48117e2dd94ef5d85a14f294effa155a1565fe90331b843"
	Oct 14 13:49:18 functional-365000 kubelet[6284]: I1014 13:49:18.878387    6284 scope.go:117] "RemoveContainer" containerID="59119ed9ed32defbc010fdeccdc0d69462489cb8ad12c47ae419c8c625f3b368"
	Oct 14 13:49:19 functional-365000 kubelet[6284]: I1014 13:49:19.683228    6284 scope.go:117] "RemoveContainer" containerID="7f20511d7350c9dbea1d0a4381e916faf22b13ca5834083a97e8ed6b3c577f32"
	Oct 14 13:49:19 functional-365000 kubelet[6284]: E1014 13:49:19.683588    6284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-8rncf_default(79c40ae6-b683-4938-b172-4a358883eaec)\"" pod="default/hello-node-connect-65d86f57f4-8rncf" podUID="79c40ae6-b683-4938-b172-4a358883eaec"
	Oct 14 13:49:22 functional-365000 kubelet[6284]: I1014 13:49:22.677166    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/35c45401-3349-4816-93d1-ae60dc798583-test-volume\") pod \"busybox-mount\" (UID: \"35c45401-3349-4816-93d1-ae60dc798583\") " pod="default/busybox-mount"
	Oct 14 13:49:22 functional-365000 kubelet[6284]: I1014 13:49:22.677199    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmdd\" (UniqueName: \"kubernetes.io/projected/35c45401-3349-4816-93d1-ae60dc798583-kube-api-access-gwmdd\") pod \"busybox-mount\" (UID: \"35c45401-3349-4816-93d1-ae60dc798583\") " pod="default/busybox-mount"
	Oct 14 13:49:26 functional-365000 kubelet[6284]: I1014 13:49:26.006891    6284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwmdd\" (UniqueName: \"kubernetes.io/projected/35c45401-3349-4816-93d1-ae60dc798583-kube-api-access-gwmdd\") pod \"35c45401-3349-4816-93d1-ae60dc798583\" (UID: \"35c45401-3349-4816-93d1-ae60dc798583\") "
	Oct 14 13:49:26 functional-365000 kubelet[6284]: I1014 13:49:26.006922    6284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/35c45401-3349-4816-93d1-ae60dc798583-test-volume\") pod \"35c45401-3349-4816-93d1-ae60dc798583\" (UID: \"35c45401-3349-4816-93d1-ae60dc798583\") "
	Oct 14 13:49:26 functional-365000 kubelet[6284]: I1014 13:49:26.007133    6284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35c45401-3349-4816-93d1-ae60dc798583-test-volume" (OuterVolumeSpecName: "test-volume") pod "35c45401-3349-4816-93d1-ae60dc798583" (UID: "35c45401-3349-4816-93d1-ae60dc798583"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 14 13:49:26 functional-365000 kubelet[6284]: I1014 13:49:26.011101    6284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c45401-3349-4816-93d1-ae60dc798583-kube-api-access-gwmdd" (OuterVolumeSpecName: "kube-api-access-gwmdd") pod "35c45401-3349-4816-93d1-ae60dc798583" (UID: "35c45401-3349-4816-93d1-ae60dc798583"). InnerVolumeSpecName "kube-api-access-gwmdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 14 13:49:26 functional-365000 kubelet[6284]: I1014 13:49:26.107219    6284 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/35c45401-3349-4816-93d1-ae60dc798583-test-volume\") on node \"functional-365000\" DevicePath \"\""
	Oct 14 13:49:26 functional-365000 kubelet[6284]: I1014 13:49:26.107249    6284 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gwmdd\" (UniqueName: \"kubernetes.io/projected/35c45401-3349-4816-93d1-ae60dc798583-kube-api-access-gwmdd\") on node \"functional-365000\" DevicePath \"\""
	Oct 14 13:49:26 functional-365000 kubelet[6284]: I1014 13:49:26.764662    6284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f9b71a8d2ab746af59149e46ed9b29ee3c4c71c928336d72c062120762e645"
	Oct 14 13:49:29 functional-365000 kubelet[6284]: I1014 13:49:29.794240    6284 scope.go:117] "RemoveContainer" containerID="c617bce9102332e38e3a3f55151c5c9950fda357124391be5bf04385ba7491ca"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: E1014 13:49:30.462484    6284 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35c45401-3349-4816-93d1-ae60dc798583" containerName="mount-munger"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.462521    6284 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c45401-3349-4816-93d1-ae60dc798583" containerName="mount-munger"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.649467    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpdw\" (UniqueName: \"kubernetes.io/projected/e4fe0ba5-5d3d-46ed-b0df-2937e7b2135b-kube-api-access-mrpdw\") pod \"kubernetes-dashboard-695b96c756-9nwtg\" (UID: \"e4fe0ba5-5d3d-46ed-b0df-2937e7b2135b\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9nwtg"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.649523    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jjs2\" (UniqueName: \"kubernetes.io/projected/920efe2e-d0d3-43fe-8192-d15d68cf0085-kube-api-access-9jjs2\") pod \"dashboard-metrics-scraper-c5db448b4-sq7ks\" (UID: \"920efe2e-d0d3-43fe-8192-d15d68cf0085\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-sq7ks"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.649539    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e4fe0ba5-5d3d-46ed-b0df-2937e7b2135b-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-9nwtg\" (UID: \"e4fe0ba5-5d3d-46ed-b0df-2937e7b2135b\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9nwtg"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.649548    6284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/920efe2e-d0d3-43fe-8192-d15d68cf0085-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-sq7ks\" (UID: \"920efe2e-d0d3-43fe-8192-d15d68cf0085\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-sq7ks"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.812071    6284 scope.go:117] "RemoveContainer" containerID="c617bce9102332e38e3a3f55151c5c9950fda357124391be5bf04385ba7491ca"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.812252    6284 scope.go:117] "RemoveContainer" containerID="d7fd10ff86873ec89863ef2665d9b37c81fe167488a72eae6b4527befbf26759"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: E1014 13:49:30.812320    6284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-89ltn_default(389668eb-b96d-42fa-b946-cafcb83e1517)\"" pod="default/hello-node-64b4f8f9ff-89ltn" podUID="389668eb-b96d-42fa-b946-cafcb83e1517"
	Oct 14 13:49:30 functional-365000 kubelet[6284]: I1014 13:49:30.873898    6284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2244f7377e90c0c80d528c18dec8329b8da264bc7c999440659168df1359daf"
	
	
	==> storage-provisioner [55472e218cb1] <==
	I1014 13:47:49.545156       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 13:47:49.555355       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 13:47:49.555371       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 13:47:49.559242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 13:47:49.559356       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-365000_99a8ade8-4366-4206-84a4-d50f44a62967!
	I1014 13:47:49.559713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2df51d84-34b6-403b-88f1-b26409c2a812", APIVersion:"v1", ResourceVersion:"518", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-365000_99a8ade8-4366-4206-84a4-d50f44a62967 became leader
	I1014 13:47:49.659925       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-365000_99a8ade8-4366-4206-84a4-d50f44a62967!
	
	
	==> storage-provisioner [60d261f3af90] <==
	I1014 13:48:23.268883       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 13:48:23.276057       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 13:48:23.276831       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 13:48:40.695628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 13:48:40.695849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-365000_8090da53-a871-4f40-9866-602ceaa3ac84!
	I1014 13:48:40.696497       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2df51d84-34b6-403b-88f1-b26409c2a812", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-365000_8090da53-a871-4f40-9866-602ceaa3ac84 became leader
	I1014 13:48:40.797040       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-365000_8090da53-a871-4f40-9866-602ceaa3ac84!
	I1014 13:48:52.775172       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1014 13:48:52.775331       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4a7935a0-ff89-48e6-8acd-08a64fd64d6b 343 0 2024-10-14 13:47:05 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-14 13:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-580cb1ef-0f21-4060-b737-acce9049477a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  580cb1ef-0f21-4060-b737-acce9049477a 697 0 2024-10-14 13:48:52 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-14 13:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-14 13:48:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1014 13:48:52.776759       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"580cb1ef-0f21-4060-b737-acce9049477a", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1014 13:48:52.776855       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-580cb1ef-0f21-4060-b737-acce9049477a" provisioned
	I1014 13:48:52.777403       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1014 13:48:52.777416       1 volume_store.go:212] Trying to save persistentvolume "pvc-580cb1ef-0f21-4060-b737-acce9049477a"
	I1014 13:48:52.780033       1 volume_store.go:219] persistentvolume "pvc-580cb1ef-0f21-4060-b737-acce9049477a" saved
	I1014 13:48:52.780260       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"580cb1ef-0f21-4060-b737-acce9049477a", APIVersion:"v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-580cb1ef-0f21-4060-b737-acce9049477a
	

-- /stdout --
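
Note: the node description, dmesg, and component logs above are minikube's standard log bundle. A minimal sketch of re-collecting them by hand, assuming the functional-365000 profile still exists on the host and the same report binary is used:

	# Re-collect the log bundle shown above (profile name taken from this report)
	out/minikube-darwin-arm64 -p functional-365000 logs
	# Optionally write the bundle to a file instead of stdout
	out/minikube-darwin-arm64 -p functional-365000 logs --file=functional-365000-logs.txt
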
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-365000 -n functional-365000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-365000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-sq7ks kubernetes-dashboard-695b96c756-9nwtg
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-365000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-sq7ks kubernetes-dashboard-695b96c756-9nwtg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-365000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-sq7ks kubernetes-dashboard-695b96c756-9nwtg: exit status 1 (42.118666ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-365000/192.168.105.4
	Start Time:       Mon, 14 Oct 2024 06:49:22 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://6780efec08f0f3d9ce9bb78ac15d5ea6dda183a1f0125851519998ab0fe30c27
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 14 Oct 2024 06:49:24 -0700
	      Finished:     Mon, 14 Oct 2024 06:49:24 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gwmdd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gwmdd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-365000
	  Normal  Pulling    11s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.459s (1.459s including waiting). Image size: 3547125 bytes.
	  Normal  Created    9s    kubelet            Created container mount-munger
	  Normal  Started    9s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-sq7ks" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-9nwtg" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-365000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-sq7ks kubernetes-dashboard-695b96c756-9nwtg: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.87s)
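
Note: the post-mortem queries above can be replayed by hand. A minimal sketch, assuming the functional-365000 kubeconfig context still exists; the pod names are the ones this run reported and will differ on a fresh run:

	# List every pod not in the Running phase, across all namespaces
	kubectl --context functional-365000 get po -A --field-selector=status.phase!=Running
	# Describe a reported pod for its container states and events
	kubectl --context functional-365000 describe pod busybox-mount -n default
	# Inspect the previous attempt of the crash-looping echoserver pod
	kubectl --context functional-365000 logs hello-node-64b4f8f9ff-89ltn --previous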

TestMultiControlPlane/serial/StartCluster (725.39s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-063000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1014 06:51:57.188944    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:52:24.915654    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:46.554289    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:46.562018    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:46.575423    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:46.598819    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:46.642252    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:46.725740    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:46.889227    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:47.212762    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:47.856470    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:49.140306    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:51.704016    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:53:56.827695    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:54:07.071337    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:54:27.553527    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:55:08.534314    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:56:30.463932    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:56:57.209702    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:58:46.575653    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:59:14.306381    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-063000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 52 (12m5.309412291s)

                                                
                                                
-- stdout --
	* [ha-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-063000" primary control-plane node in "ha-063000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Deleting "ha-063000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 06:49:41.185768    2257 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:49:41.185944    2257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:41.185950    2257 out.go:358] Setting ErrFile to fd 2...
	I1014 06:49:41.185953    2257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:41.186083    2257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 06:49:41.187503    2257 out.go:352] Setting JSON to false
	I1014 06:49:41.206698    2257 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1151,"bootTime":1728912630,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:49:41.206791    2257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:49:41.210551    2257 out.go:177] * [ha-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 06:49:41.217429    2257 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 06:49:41.217511    2257 notify.go:220] Checking for updates...
	I1014 06:49:41.224503    2257 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:49:41.227567    2257 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:49:41.230544    2257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:49:41.233553    2257 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 06:49:41.236608    2257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 06:49:41.239775    2257 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:49:41.243567    2257 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 06:49:41.250480    2257 start.go:297] selected driver: qemu2
	I1014 06:49:41.250486    2257 start.go:901] validating driver "qemu2" against <nil>
	I1014 06:49:41.250492    2257 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 06:49:41.253361    2257 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 06:49:41.257505    2257 out.go:177] * Automatically selected the socket_vmnet network
	I1014 06:49:41.260521    2257 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 06:49:41.260545    2257 cni.go:84] Creating CNI manager for ""
	I1014 06:49:41.260573    2257 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 06:49:41.260577    2257 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 06:49:41.260609    2257 start.go:340] cluster config:
	{Name:ha-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:49:41.265550    2257 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 06:49:41.272529    2257 out.go:177] * Starting "ha-063000" primary control-plane node in "ha-063000" cluster
	I1014 06:49:41.276543    2257 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:49:41.276561    2257 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 06:49:41.276573    2257 cache.go:56] Caching tarball of preloaded images
	I1014 06:49:41.276700    2257 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 06:49:41.276716    2257 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 06:49:41.276943    2257 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/ha-063000/config.json ...
	I1014 06:49:41.276961    2257 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/ha-063000/config.json: {Name:mk355755cacac760df43c2d92813ce4fee4ef0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 06:49:41.277294    2257 start.go:360] acquireMachinesLock for ha-063000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 06:49:41.277350    2257 start.go:364] duration metric: took 49.417µs to acquireMachinesLock for "ha-063000"
	I1014 06:49:41.277364    2257 start.go:93] Provisioning new machine with config: &{Name:ha-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 06:49:41.277399    2257 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 06:49:41.281543    2257 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 06:49:41.306730    2257 start.go:159] libmachine.API.Create for "ha-063000" (driver="qemu2")
	I1014 06:49:41.306757    2257 client.go:168] LocalClient.Create starting
	I1014 06:49:41.306842    2257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 06:49:41.306880    2257 main.go:141] libmachine: Decoding PEM data...
	I1014 06:49:41.306895    2257 main.go:141] libmachine: Parsing certificate...
	I1014 06:49:41.306927    2257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 06:49:41.306956    2257 main.go:141] libmachine: Decoding PEM data...
	I1014 06:49:41.306965    2257 main.go:141] libmachine: Parsing certificate...
	I1014 06:49:41.307356    2257 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 06:49:41.586995    2257 main.go:141] libmachine: Creating SSH key...
	I1014 06:49:41.855779    2257 main.go:141] libmachine: Creating Disk image...
	I1014 06:49:41.855792    2257 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 06:49:41.856020    2257 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2
	I1014 06:49:41.871978    2257 main.go:141] libmachine: STDOUT: 
	I1014 06:49:41.871998    2257 main.go:141] libmachine: STDERR: 
	I1014 06:49:41.872057    2257 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2 +20000M
	I1014 06:49:41.880821    2257 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 06:49:41.880844    2257 main.go:141] libmachine: STDERR: 
	I1014 06:49:41.880860    2257 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2
	I1014 06:49:41.880865    2257 main.go:141] libmachine: Starting QEMU VM...
	I1014 06:49:41.880877    2257 qemu.go:418] Using hvf for hardware acceleration
	I1014 06:49:41.880907    2257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a8:41:3f:b9:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2
	I1014 06:49:41.925004    2257 main.go:141] libmachine: STDOUT: 
	I1014 06:49:41.925034    2257 main.go:141] libmachine: STDERR: 
	I1014 06:49:41.925038    2257 main.go:141] libmachine: Attempt 0
	I1014 06:49:41.925054    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:41.925158    2257 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1014 06:49:41.925178    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:49:41.925188    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:49:41.925193    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:49:41.925200    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:49:43.927346    2257 main.go:141] libmachine: Attempt 1
	I1014 06:49:43.927434    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:43.928004    2257 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1014 06:49:43.928059    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:49:43.928103    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:49:43.928134    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:49:43.928161    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:49:45.930366    2257 main.go:141] libmachine: Attempt 2
	I1014 06:49:45.930459    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:45.931010    2257 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1014 06:49:45.931069    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:49:45.931114    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:49:45.931148    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:49:45.931183    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:49:47.933412    2257 main.go:141] libmachine: Attempt 3
	I1014 06:49:47.933459    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:47.933598    2257 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1014 06:49:47.933612    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:49:47.933619    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:49:47.933624    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:49:47.933631    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:49:49.935650    2257 main.go:141] libmachine: Attempt 4
	I1014 06:49:49.935658    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:49.935693    2257 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1014 06:49:49.935699    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:49:49.935704    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:49:49.935718    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:49:49.935723    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:49:51.937736    2257 main.go:141] libmachine: Attempt 5
	I1014 06:49:51.937749    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:51.937793    2257 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1014 06:49:51.937798    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:49:51.937804    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:49:51.937821    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:49:51.937829    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:49:53.939850    2257 main.go:141] libmachine: Attempt 6
	I1014 06:49:53.939871    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:53.939942    2257 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1014 06:49:53.939951    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:49:53.939957    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:49:53.939961    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:49:53.939966    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:49:55.942010    2257 main.go:141] libmachine: Attempt 7
	I1014 06:49:55.942051    2257 main.go:141] libmachine: Searching for 2:a8:41:3f:b9:b3 in /var/db/dhcpd_leases ...
	I1014 06:49:55.942183    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:49:55.942195    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:49:55.942215    2257 main.go:141] libmachine: Found match: 2:a8:41:3f:b9:b3
	I1014 06:49:55.942227    2257 main.go:141] libmachine: IP: 192.168.105.5
	I1014 06:49:55.942232    2257 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1014 06:55:41.327775    2257 start.go:128] duration metric: took 6m0.031641208s to createHost
	I1014 06:55:41.327856    2257 start.go:83] releasing machines lock for "ha-063000", held for 6m0.031820792s
	W1014 06:55:41.327905    2257 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I1014 06:55:41.339434    2257 out.go:177] * Deleting "ha-063000" in qemu2 ...
	W1014 06:55:41.375196    2257 out.go:270] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1014 06:55:41.375228    2257 start.go:729] Will try again in 5 seconds ...
	I1014 06:55:46.377589    2257 start.go:360] acquireMachinesLock for ha-063000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 06:55:46.378243    2257 start.go:364] duration metric: took 548.5µs to acquireMachinesLock for "ha-063000"
	I1014 06:55:46.378407    2257 start.go:93] Provisioning new machine with config: &{Name:ha-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 06:55:46.378674    2257 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 06:55:46.392344    2257 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 06:55:46.441279    2257 start.go:159] libmachine.API.Create for "ha-063000" (driver="qemu2")
	I1014 06:55:46.441320    2257 client.go:168] LocalClient.Create starting
	I1014 06:55:46.441461    2257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 06:55:46.441543    2257 main.go:141] libmachine: Decoding PEM data...
	I1014 06:55:46.441565    2257 main.go:141] libmachine: Parsing certificate...
	I1014 06:55:46.441638    2257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 06:55:46.441697    2257 main.go:141] libmachine: Decoding PEM data...
	I1014 06:55:46.441713    2257 main.go:141] libmachine: Parsing certificate...
	I1014 06:55:46.442302    2257 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 06:55:46.610192    2257 main.go:141] libmachine: Creating SSH key...
	I1014 06:55:46.840773    2257 main.go:141] libmachine: Creating Disk image...
	I1014 06:55:46.840782    2257 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 06:55:46.841023    2257 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2
	I1014 06:55:46.851483    2257 main.go:141] libmachine: STDOUT: 
	I1014 06:55:46.851515    2257 main.go:141] libmachine: STDERR: 
	I1014 06:55:46.851576    2257 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2 +20000M
	I1014 06:55:46.860085    2257 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 06:55:46.860110    2257 main.go:141] libmachine: STDERR: 
	I1014 06:55:46.860125    2257 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2
	I1014 06:55:46.860133    2257 main.go:141] libmachine: Starting QEMU VM...
	I1014 06:55:46.860141    2257 qemu.go:418] Using hvf for hardware acceleration
	I1014 06:55:46.860176    2257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d1:32:1e:c7:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2
	I1014 06:55:46.896720    2257 main.go:141] libmachine: STDOUT: 
	I1014 06:55:46.896746    2257 main.go:141] libmachine: STDERR: 
	I1014 06:55:46.896750    2257 main.go:141] libmachine: Attempt 0
	I1014 06:55:46.896765    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:55:46.896885    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:55:46.896901    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:55:46.896909    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:55:46.896915    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:55:46.896921    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:55:46.896928    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:55:48.899187    2257 main.go:141] libmachine: Attempt 1
	I1014 06:55:48.899258    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:55:48.899846    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:55:48.899899    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:55:48.899935    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:55:48.899966    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:55:48.899996    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:55:48.900023    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:55:50.902326    2257 main.go:141] libmachine: Attempt 2
	I1014 06:55:50.902403    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:55:50.902794    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:55:50.902850    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:55:50.902880    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:55:50.902908    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:55:50.902941    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:55:50.902971    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:55:52.904154    2257 main.go:141] libmachine: Attempt 3
	I1014 06:55:52.904188    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:55:52.904334    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:55:52.904349    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:55:52.904361    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:55:52.904367    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:55:52.904372    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:55:52.904378    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:55:54.906454    2257 main.go:141] libmachine: Attempt 4
	I1014 06:55:54.906460    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:55:54.906502    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:55:54.906509    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:55:54.906524    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:55:54.906530    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:55:54.906535    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:55:54.906540    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:55:56.908595    2257 main.go:141] libmachine: Attempt 5
	I1014 06:55:56.908605    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:55:56.908635    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:55:56.908641    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:55:56.908645    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:55:56.908665    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:55:56.908671    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:55:56.908676    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:55:58.910767    2257 main.go:141] libmachine: Attempt 6
	I1014 06:55:58.910787    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:55:58.910871    2257 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1014 06:55:58.910880    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2:a8:41:3f:b9:b3 ID:1,2:a8:41:3f:b9:b3 Lease:0x670d2f92}
	I1014 06:55:58.910894    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:fa:bc:2a:c:32 ID:1,da:fa:bc:2a:c:32 Lease:0x670d2ece}
	I1014 06:55:58.910900    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:6a:34:60:ba:b2:b2 ID:1,6a:34:60:ba:b2:b2 Lease:0x670d207c}
	I1014 06:55:58.910906    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:b6:b1:74:0:95:d7 ID:1,b6:b1:74:0:95:d7 Lease:0x670d2053}
	I1014 06:55:58.910911    2257 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x670d2b14}
	I1014 06:56:00.913021    2257 main.go:141] libmachine: Attempt 7
	I1014 06:56:00.913047    2257 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 06:56:00.913187    2257 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1014 06:56:00.913200    2257 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:4a:d1:32:1e:c7:be ID:1,4a:d1:32:1e:c7:be Lease:0x670d30ff}
	I1014 06:56:00.913203    2257 main.go:141] libmachine: Found match: 4a:d1:32:1e:c7:be
	I1014 06:56:00.913215    2257 main.go:141] libmachine: IP: 192.168.105.6
	I1014 06:56:00.913222    2257 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1014 07:01:46.440848    2257 start.go:128] duration metric: took 6m0.064889083s to createHost
	I1014 07:01:46.440929    2257 start.go:83] releasing machines lock for "ha-063000", held for 6m0.06541475s
	W1014 07:01:46.441172    2257 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-063000" may fix it: creating host: create host timed out in 360.000000 seconds
	* Failed to start qemu2 VM. Running "minikube delete -p ha-063000" may fix it: creating host: create host timed out in 360.000000 seconds
	I1014 07:01:46.448737    2257 out.go:201] 
	W1014 07:01:46.452868    2257 out.go:270] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	W1014 07:01:46.452931    2257 out.go:270] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1014 07:01:46.452991    2257 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1014 07:01:46.459813    2257 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-063000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (71.67275ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 07:01:46.552927    2612 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:01:46.552937    2612 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (725.39s)
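
Both create attempts above follow the same shape: the VM does get a DHCP lease (its MAC appears in /var/db/dhcpd_leases on attempt 7 each time), so the 360-second createHost budget is burned in the subsequent "Waiting for VM to start (ssh ...)" step, not in the lease poll itself. The poll is a two-second rescan of the macOS bootpd lease file; a minimal Go sketch under that assumption (findLeaseIP is a hypothetical helper, and the assumed file layout is bootpd's ip_address=/hw_address=1,<mac> block format):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"time"
	)

	// findLeaseIP rescans /var/db/dhcpd_leases every two seconds until a lease
	// whose hw_address matches mac shows up, mirroring the "Attempt N" loop in
	// the log above. Assumed block layout:
	//
	//	ip_address=192.168.105.5
	//	hw_address=1,2:a8:41:3f:b9:b3
	func findLeaseIP(mac string, timeout time.Duration) (string, error) {
		re := regexp.MustCompile(`ip_address=(\S+)\s+hw_address=1,` + regexp.QuoteMeta(mac))
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if data, err := os.ReadFile("/var/db/dhcpd_leases"); err == nil {
				if m := re.FindSubmatch(data); m != nil {
					return string(m[1]), nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
	}

	func main() {
		fmt.Println(findLeaseIP("2:a8:41:3f:b9:b3", 30*time.Second))
	}

For the disk-image steps in the same trace, note that "qemu-img resize ... +20000M" grows the image by 20000M (the leading "+" means grow-by, not resize-to) after the raw scaffold has been converted to qcow2 with "qemu-img convert -f raw -O qcow2".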

                                                
                                    
TestMultiControlPlane/serial/DeployApp (91.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.031125ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-063000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- rollout status deployment/busybox: exit status 1 (62.957375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.471042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:01:46.743618    1497 retry.go:31] will retry after 740.077755ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.231ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:01:47.594445    1497 retry.go:31] will retry after 1.721064146s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.084625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:01:49.428946    1497 retry.go:31] will retry after 2.980244521s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.907458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:01:52.521426    1497 retry.go:31] will retry after 2.682903442s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.247959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:01:55.315883    1497 retry.go:31] will retry after 3.138207294s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1014 07:01:57.205246    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.36675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:01:58.564752    1497 retry.go:31] will retry after 4.857809989s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.962417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:02:03.535902    1497 retry.go:31] will retry after 9.315737172s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.610417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:02:12.961561    1497 retry.go:31] will retry after 14.154619216s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.388625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:02:27.227868    1497 retry.go:31] will retry after 22.162300855s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.221333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:02:49.501731    1497 retry.go:31] will retry after 27.698187139s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.302666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.647ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.542958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.958208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.776834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-063000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (36.762292ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 07:03:17.593325    2670 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:17.593334    2670 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (91.04s)
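
The "will retry after ..." lines come from the harness's retry helper: the waits grow roughly exponentially with jitter (740ms, 1.7s, 3s, ... 27.7s) until the overall budget runs out, at which point the test gives up on resolving pod IPs. A minimal Go sketch of that shape (retryWithBackoff is a hypothetical stand-in; the exact schedule in retry.go may differ):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or maxElapsed is
	// spent, roughly doubling a jittered wait each round, which matches the
	// shape of the intervals logged above.
	func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
		wait := 500 * time.Millisecond
		start := time.Now()
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Second), err)
			}
			// Sleep between 0.5x and 1.5x of the nominal wait, then double it.
			time.Sleep(wait/2 + time.Duration(rand.Int63n(int64(wait))))
			wait *= 2
		}
	}

	func main() {
		err := retryWithBackoff(func() error {
			return errors.New(`no server found for cluster "ha-063000"`)
		}, 10*time.Second)
		fmt.Println(err)
	}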

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-063000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (63.3025ms)
** stderr ** 
	error: no server found for cluster "ha-063000"
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (35.575375ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:17.693841    2675 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:17.693846    2675 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)
TestMultiControlPlane/serial/AddWorkerNode (0.09s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-063000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-063000 -v=7 --alsologtostderr: exit status 50 (53.194875ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1014 07:03:17.728022    2677 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:03:17.728298    2677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:17.728302    2677 out.go:358] Setting ErrFile to fd 2...
	I1014 07:03:17.728304    2677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:17.728434    2677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:03:17.728680    2677 mustload.go:65] Loading cluster: ha-063000
	I1014 07:03:17.728880    2677 config.go:182] Loaded profile config "ha-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:03:17.729567    2677 host.go:66] Checking if "ha-063000" exists ...
	I1014 07:03:17.734192    2677 out.go:201] 
	W1014 07:03:17.738168    2677 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-063000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-063000 endpoint: failed to lookup ip for ""
	W1014 07:03:17.738182    2677 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1014 07:03:17.743106    2677 out.go:201] 
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-063000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (35.244625ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:17.782612    2679 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:17.782619    2679 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)
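
The suggestion block above renders as `minikube delete <no value>`: in Go's text/template, `<no value>` is what Execute prints when a referenced field resolves to nil, which is consistent with the suggestion text being a template executed without its data. A small sketch of that behavior (the template text and field name are illustrative, not minikube's source):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // {{.Profile}} indexes a key that is absent from the map; the
        // zero value for `any` is nil, and text/template prints nil
        // values as "<no value>".
        t := template.Must(template.New("suggestion").Parse(
            "minikube delete {{.Profile}}\nminikube start {{.Profile}}\n"))
        if err := t.Execute(os.Stdout, map[string]any{}); err != nil {
            panic(err)
        }
    }

Running this prints `minikube delete <no value>` and `minikube start <no value>`, matching the log.
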
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-063000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-063000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.281625ms)
** stderr ** 
	Error in configuration: context was not found for specified context: ha-063000
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-063000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-063000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (35.403458ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:17.845560    2682 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:17.845569    2682 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
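
The two errors at ha_test.go:257 and ha_test.go:264 are one failure: kubectl exited non-zero without writing any jsonpath output, and decoding the resulting empty buffer as JSON fails with exactly `unexpected end of JSON input`. A minimal sketch:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl failed, so the captured stdout is empty.
        out := []byte("")

        var labels []map[string]string
        err := json.Unmarshal(out, &labels)
        fmt.Println(err) // prints: unexpected end of JSON input
    }
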
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-063000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-063000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-063000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-063000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-063000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-063000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-063000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-063000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (34.790792ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:17.933281    2687 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:17.933289    2687 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
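
The node-count assertion reads the `profile list --output json` payload and presumably compares the length of Config.Nodes for the profile against the expected cluster shape; the single node entry in the output above is why it reports "have 1 nodes". A sketch of that check against the structure visible in the payload (the struct is trimmed to the fields used here; beyond the JSON field names, the details are assumptions):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Trimmed mirror of the `minikube profile list --output json`
    // payload; only the fields needed for the node count are declared.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    Name         string `json:"Name"`
                    ControlPlane bool   `json:"ControlPlane"`
                    Worker       bool   `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func nodeCount(raw []byte, profile string) (int, error) {
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            return 0, err
        }
        for _, p := range pl.Valid {
            if p.Name == profile {
                return len(p.Config.Nodes), nil
            }
        }
        return 0, fmt.Errorf("profile %q not found", profile)
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-063000",
            "Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
        n, err := nodeCount(raw, "ha-063000")
        fmt.Println(n, err) // 1 <nil> — the test expected 4
    }
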
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-063000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-063000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.455875ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1014 07:03:18.002293    2691 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:03:18.002595    2691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:18.002598    2691 out.go:358] Setting ErrFile to fd 2...
	I1014 07:03:18.002601    2691 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:18.002726    2691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:03:18.002999    2691 mustload.go:65] Loading cluster: ha-063000
	I1014 07:03:18.003230    2691 config.go:182] Loaded profile config "ha-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:03:18.007188    2691 out.go:201] 
	W1014 07:03:18.010106    2691 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1014 07:03:18.010111    2691 out.go:270] * 
	* 
	W1014 07:03:18.011649    2691 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:03:18.016165    2691 out.go:201] 
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-063000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (34.429ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:18.090021    2695 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:18.090027    2695 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
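
`Could not find node m02` is consistent with the saved profile shown in this report: the restored config holds a single node whose Name is empty, so a lookup for the second node's conventional name (m02) has nothing to match. A hypothetical sketch of such a lookup (minikube's real retrieval lives in its node package; this only illustrates the shape of the failure):

    package main

    import (
        "errors"
        "fmt"
    )

    type node struct {
        Name         string
        ControlPlane bool
    }

    // findNode mirrors the failing retrieval: the config has one node
    // with an empty Name, so "m02" is never found.
    func findNode(nodes []node, name string) (node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return node{}, errors.New("retrieving node: Could not find node " + name)
    }

    func main() {
        nodes := []node{{Name: "", ControlPlane: true}} // from the saved profile
        _, err := findNode(nodes, "m02")
        fmt.Println(err) // Could not find node m02
    }
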
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-063000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-063000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-063000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-063000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (34.675583ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:18.178011    2700 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:18.178019    2700 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-063000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-063000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.028042ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1014 07:03:18.210951    2702 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:03:18.211214    2702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:18.211217    2702 out.go:358] Setting ErrFile to fd 2...
	I1014 07:03:18.211220    2702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:18.211344    2702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:03:18.211599    2702 mustload.go:65] Loading cluster: ha-063000
	I1014 07:03:18.211793    2702 config.go:182] Loaded profile config "ha-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:03:18.217151    2702 out.go:201] 
	W1014 07:03:18.220153    2702 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1014 07:03:18.220163    2702 out.go:270] * 
	* 
	W1014 07:03:18.221553    2702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:03:18.224152    2702 out.go:201] 
** /stderr **
ha_test.go:424: I1014 07:03:18.210951    2702 out.go:345] Setting OutFile to fd 1 ...
I1014 07:03:18.211214    2702 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:03:18.211217    2702 out.go:358] Setting ErrFile to fd 2...
I1014 07:03:18.211220    2702 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:03:18.211344    2702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
I1014 07:03:18.211599    2702 mustload.go:65] Loading cluster: ha-063000
I1014 07:03:18.211793    2702 config.go:182] Loaded profile config "ha-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:03:18.217151    2702 out.go:201] 
W1014 07:03:18.220153    2702 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1014 07:03:18.220163    2702 out.go:270] * 
* 
W1014 07:03:18.221553    2702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1014 07:03:18.224152    2702 out.go:201] 
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-063000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-063000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (33.775459ms)
** stderr ** 
	E1014 07:03:18.294743    2706 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1014 07:03:18.295157    2706 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1014 07:03:18.296312    2706 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1014 07:03:18.296594    2706 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1014 07:03:18.297768    2706 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?
** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (34.93275ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:18.332606    2707 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:18.332617    2707 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)
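
The bare `kubectl get nodes` at ha_test.go:450 passes no --context, and with the ha-063000 context missing from the kubeconfig (see the NodeLabels failure above) kubectl falls back to its zero-config default of localhost:8080, producing the connection-refused errors. The harness-style invocation can be sketched as:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Without --context, and with no current context configured,
        // kubectl tries its legacy default endpoint, localhost:8080.
        out, err := exec.Command("kubectl", "get", "nodes").CombinedOutput()
        if err != nil {
            // e.g. "The connection to the server localhost:8080 was refused ..."
            fmt.Printf("kubectl failed: %v\n%s", err, out)
        }
    }
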
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-063000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-063000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-063000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-063000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-063000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-063000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-063000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-063000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (35.582125ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1014 07:03:18.420974    2712 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:03:18.420985    2712 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (982.78s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-063000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-063000 -v=7 --alsologtostderr
E1014 07:03:20.297542    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-063000 -v=7 --alsologtostderr: (6.522034417s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-063000 --wait=true -v=7 --alsologtostderr
E1014 07:03:46.572675    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:06:57.203831    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:08:46.569845    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:10:09.663839    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:11:57.201011    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:13:46.567062    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:16:57.198236    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:18:46.564286    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-063000 --wait=true -v=7 --alsologtostderr: signal: killed (16m16.192307083s)
-- stdout --
	* [ha-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-063000" primary control-plane node in "ha-063000" cluster
	* Restarting existing qemu2 VM for "ha-063000" ...
-- /stdout --
** stderr ** 
	I1014 07:03:25.044929    2735 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:03:25.045121    2735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:25.045128    2735 out.go:358] Setting ErrFile to fd 2...
	I1014 07:03:25.045131    2735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:25.045292    2735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:03:25.046559    2735 out.go:352] Setting JSON to false
	I1014 07:03:25.066259    2735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1975,"bootTime":1728912630,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:03:25.066327    2735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:03:25.070539    2735 out.go:177] * [ha-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:03:25.077621    2735 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:03:25.077688    2735 notify.go:220] Checking for updates...
	I1014 07:03:25.083532    2735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:03:25.086576    2735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:03:25.089518    2735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:03:25.092544    2735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:03:25.095606    2735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:03:25.098828    2735 config.go:182] Loaded profile config "ha-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:03:25.098873    2735 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:03:25.103473    2735 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:03:25.109400    2735 start.go:297] selected driver: qemu2
	I1014 07:03:25.109406    2735 start.go:901] validating driver "qemu2" against &{Name:ha-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:03:25.109457    2735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:03:25.112249    2735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:03:25.112299    2735 cni.go:84] Creating CNI manager for ""
	I1014 07:03:25.112319    2735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:03:25.112380    2735 start.go:340] cluster config:
	{Name:ha-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-063000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:03:25.117520    2735 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:03:25.125537    2735 out.go:177] * Starting "ha-063000" primary control-plane node in "ha-063000" cluster
	I1014 07:03:25.129452    2735 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:03:25.129467    2735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:03:25.129474    2735 cache.go:56] Caching tarball of preloaded images
	I1014 07:03:25.129539    2735 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:03:25.129544    2735 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:03:25.129592    2735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/ha-063000/config.json ...
	I1014 07:03:25.130010    2735 start.go:360] acquireMachinesLock for ha-063000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:03:25.130059    2735 start.go:364] duration metric: took 39.75µs to acquireMachinesLock for "ha-063000"
	I1014 07:03:25.130068    2735 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:03:25.130072    2735 fix.go:54] fixHost starting: 
	I1014 07:03:25.130198    2735 fix.go:112] recreateIfNeeded on ha-063000: state=Stopped err=<nil>
	W1014 07:03:25.130204    2735 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:03:25.138540    2735 out.go:177] * Restarting existing qemu2 VM for "ha-063000" ...
	I1014 07:03:25.142513    2735 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:03:25.142555    2735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d1:32:1e:c7:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/ha-063000/disk.qcow2
	I1014 07:03:25.182579    2735 main.go:141] libmachine: STDOUT: 
	I1014 07:03:25.182603    2735 main.go:141] libmachine: STDERR: 
	I1014 07:03:25.182607    2735 main.go:141] libmachine: Attempt 0
	I1014 07:03:25.182616    2735 main.go:141] libmachine: Searching for 4a:d1:32:1e:c7:be in /var/db/dhcpd_leases ...
	I1014 07:03:25.182690    2735 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1014 07:03:25.182708    2735 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:4a:d1:32:1e:c7:be ID:1,4a:d1:32:1e:c7:be Lease:0x670d24aa}
	I1014 07:03:25.182716    2735 main.go:141] libmachine: Found match: 4a:d1:32:1e:c7:be
	I1014 07:03:25.182723    2735 main.go:141] libmachine: IP: 192.168.105.6
	I1014 07:03:25.182728    2735 main.go:141] libmachine: Waiting for VM to start (ssh -p 0 docker@192.168.105.6)...

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-063000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-063000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-063000: context deadline exceeded (667ns)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-063000" : context deadline exceeded
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-063000	

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-063000 -n ha-063000: exit status 7 (35.647625ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1014 07:19:41.196564    2802 status.go:393] failed to get driver ip: parsing IP: 
	E1014 07:19:41.196569    2802 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-063000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (982.78s)
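Note on the failure above: the qemu2 driver resolves the restarted VM's address by matching the NIC's MAC (4a:d1:32:1e:c7:be) against macOS's DHCP lease database, as the "Searching for ... in /var/db/dhcpd_leases" lines show, and then blocks in "Waiting for VM to start" until the test context kills the run. A minimal Go sketch of that lease lookup, assuming the usual macOS bootpd lease format (this is not minikube's actual code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const mac = "4a:d1:32:1e:c7:be" // MAC from the -device virtio-net-pci line above

	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var ip string    // ip_address of the entry currently being scanned
	var matched bool // whether this entry's hw_address contained mac
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry begins
			ip, matched = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			matched = true
		case line == "}" && matched: // entry closed and it was ours
			fmt.Println("IP:", ip)
			return
		}
	}
	fmt.Println("no lease found for", mac)
}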

TestJSONOutput/start/Command (725.28s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-467000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1014 07:21:57.156690    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:23:46.516128    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:26:49.610171    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:26:57.144786    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:28:46.509339    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:31:57.137889    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-467000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 52 (12m5.275050583s)

-- stdout --
	{"specversion":"1.0","id":"26f33d8e-8fec-4e3c-aa85-057713bec91b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"31f16c4a-8b3a-4406-9df6-989a8c393d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"0eb8793f-80db-4c80-9bd4-32e38c486647","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig"}}
	{"specversion":"1.0","id":"9de4eb9c-a781-49c4-89c4-057c5990d2ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ba5b3ea1-11d0-495a-b9a0-6e119e17e1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9edb5803-ce79-41da-8da3-256a49bee8cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube"}}
	{"specversion":"1.0","id":"141470c5-5452-41be-9bbc-4048cfc55f8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7c14ff8d-ba72-4a68-b217-f94485ebdd56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"64716050-0941-40ef-bad0-fac62bb2ac65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9e9683ac-b390-48e5-9385-2966df7763bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-467000\" primary control-plane node in \"json-output-467000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2bd2bff3-9ab9-4e3e-b01d-3be197e408e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"67bf4a51-b945-4d03-bbb6-c2a80833520a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-467000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"9283f87a-a9dd-4ef0-9d01-3796c79dd648","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"1a46ec53-11e5-4148-ab89-63fc91518098","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4883980-880f-49ac-86fb-87e5e10b5001","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-467000\" may fix it: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"5f8ae7a7-31b8-43c0-81a3-65194199a4c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try 'minikube delete', and disable any conflicting VPN or firewall software","exitcode":"52","issues":"https://github.com/kubernetes/minikube/issues/7072","message":"Failed to start host: creating host: create host timed out in 360.000000 seconds","name":"DRV_CREATE_TIMEOUT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-467000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 52
--- FAIL: TestJSONOutput/start/Command (725.28s)
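The stdout above is a stream of line-delimited CloudEvents. A rough Go sketch (not the test's own implementation) that decodes such a stream and surfaces the io.k8s.sigs.minikube.error payloads, ending with the DRV_CREATE_TIMEOUT event shown above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent covers only the fields used here; minikube's events carry more.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprog
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip interleaved klog noise such as the cert_rotation errors above
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error: %s (exitcode=%q, name=%q)\n",
				ev.Data["message"], ev.Data["exitcode"], ev.Data["name"])
		}
	}
}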

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-467000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 26f33d8e-8fec-4e3c-aa85-057713bec91b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 31f16c4a-8b3a-4406-9df6-989a8c393d1f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19790"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0eb8793f-80db-4c80-9bd4-32e38c486647
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9de4eb9c-a781-49c4-89c4-057c5990d2ec
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ba5b3ea1-11d0-495a-b9a0-6e119e17e1be
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9edb5803-ce79-41da-8da3-256a49bee8cc
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 141470c5-5452-41be-9bbc-4048cfc55f8d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7c14ff8d-ba72-4a68-b217-f94485ebdd56
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 64716050-0941-40ef-bad0-fac62bb2ac65
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9e9683ac-b390-48e5-9385-2966df7763bd
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-467000\" primary control-plane node in \"json-output-467000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2bd2bff3-9ab9-4e3e-b01d-3be197e408e8
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 67bf4a51-b945-4d03-bbb6-c2a80833520a
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-467000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9283f87a-a9dd-4ef0-9d01-3796c79dd648
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1a46ec53-11e5-4148-ab89-63fc91518098
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: c4883980-880f-49ac-86fb-87e5e10b5001
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-467000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5f8ae7a7-31b8-43c0-81a3-65194199a4c6
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
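What the assertion objects to: currentstep 9 is first paired with the "Creating qemu2 VM" message and then reused for the "Deleting" message during the create/delete/retry loop. A simplified sketch of that distinctness check (json_output_test.go:114 holds the authoritative version):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type stepEvent struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		Message     string `json:"message"`
	} `json:"data"`
}

func main() {
	firstUse := map[string]string{} // currentstep -> first message seen with it
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev stepEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		prev, seen := firstUse[ev.Data.CurrentStep]
		if !seen {
			firstUse[ev.Data.CurrentStep] = ev.Data.Message
		} else if prev != ev.Data.Message {
			fmt.Printf("step %s has already been assigned to another step:\n%s\nCannot use for:\n%s\n",
				ev.Data.CurrentStep, prev, ev.Data.Message)
			os.Exit(1)
		}
	}
}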

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 26f33d8e-8fec-4e3c-aa85-057713bec91b
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 31f16c4a-8b3a-4406-9df6-989a8c393d1f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19790"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0eb8793f-80db-4c80-9bd4-32e38c486647
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9de4eb9c-a781-49c4-89c4-057c5990d2ec
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ba5b3ea1-11d0-495a-b9a0-6e119e17e1be
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9edb5803-ce79-41da-8da3-256a49bee8cc
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 141470c5-5452-41be-9bbc-4048cfc55f8d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7c14ff8d-ba72-4a68-b217-f94485ebdd56
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 64716050-0941-40ef-bad0-fac62bb2ac65
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9e9683ac-b390-48e5-9385-2966df7763bd
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-467000\" primary control-plane node in \"json-output-467000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2bd2bff3-9ab9-4e3e-b01d-3be197e408e8
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 67bf4a51-b945-4d03-bbb6-c2a80833520a
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-467000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9283f87a-a9dd-4ef0-9d01-3796c79dd648
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1a46ec53-11e5-4148-ab89-63fc91518098
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: c4883980-880f-49ac-86fb-87e5e10b5001
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-467000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5f8ae7a7-31b8-43c0-81a3-65194199a4c6
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
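The companion assertion requires currentstep values to increase across the run; the retry loop above emits step 9 three times, so the sequence 0, 1, 3, 9, 9, 9 fails it. A hedged sketch of a strictly-increasing check in the same spirit (the exact comparison lives in json_output_test.go:144):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

func main() {
	last := -1 // last currentstep seen
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev struct {
			Type string `json:"type"`
			Data struct {
				CurrentStep string `json:"currentstep"`
			} `json:"data"`
		}
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		n, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			continue
		}
		if n <= last {
			fmt.Printf("current step is not in increasing order: got %d after %d\n", n, last)
			os.Exit(1)
		}
		last = n
	}
}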

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-467000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-467000 --output=json --user=testUser: exit status 50 (91.017ms)

-- stdout --
	{"specversion":"1.0","id":"20c428ed-1180-4d09-8b94-e850c87a8cb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-467000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-467000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.06s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-467000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-467000 --output=json --user=testUser: exit status 50 (59.9195ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-467000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-467000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
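The literal "minikube delete <no value>" in the suggestion above is Go's text/template rendering an unset {{.profileArg}} field. A tiny reproduction (the template name is hypothetical; the "<no value>" behavior is the standard library's):

package main

import (
	"os"
	"text/template"
)

func main() {
	// A field the data map never defines renders as "<no value>" in
	// text/template, which is exactly what the suggestion prints for
	// {{.profileArg}}.
	t := template.Must(template.New("advice").Parse(
		"minikube delete {{.profileArg}}\nminikube start {{.profileArg}}\n"))
	_ = t.Execute(os.Stdout, map[string]string{}) // -> "minikube delete <no value>" ...
}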

TestMountStart/serial/StartWithMountFirst (10.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-855000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E1014 07:33:46.502928    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-855000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.068150292s)

-- stdout --
	* [mount-start-1-855000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-855000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-855000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-855000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-855000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-855000 -n mount-start-1-855000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-855000 -n mount-start-1-855000: exit status 7 (76.560625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-855000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.15s)
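From this failure onward the root cause is identical: nothing is listening on /var/run/socket_vmnet (the SocketVMnetPath in the cluster configs above), so every qemu2 launch dies with "Connection refused". A quick hedged probe that reproduces the error without launching qemu:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to qemu; if the
	// socket_vmnet daemon is down, this fails the same way the logs do.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}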

TestMultiNode/serial/FreshStart2Nodes (9.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-613000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-613000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.787270958s)

-- stdout --
	* [multinode-613000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-613000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:33:51.227436    3396 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:33:51.227588    3396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:33:51.227591    3396 out.go:358] Setting ErrFile to fd 2...
	I1014 07:33:51.227594    3396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:33:51.227718    3396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:33:51.228850    3396 out.go:352] Setting JSON to false
	I1014 07:33:51.246498    3396 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3801,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:33:51.246569    3396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:33:51.252661    3396 out.go:177] * [multinode-613000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:33:51.260792    3396 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:33:51.260821    3396 notify.go:220] Checking for updates...
	I1014 07:33:51.267671    3396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:33:51.270681    3396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:33:51.273690    3396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:33:51.276654    3396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:33:51.279690    3396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:33:51.282798    3396 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:33:51.286642    3396 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:33:51.293658    3396 start.go:297] selected driver: qemu2
	I1014 07:33:51.293666    3396 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:33:51.293677    3396 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:33:51.296147    3396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:33:51.299633    3396 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:33:51.302787    3396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:33:51.302809    3396 cni.go:84] Creating CNI manager for ""
	I1014 07:33:51.302841    3396 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:33:51.302846    3396 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:33:51.302882    3396 start.go:340] cluster config:
	{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:33:51.307457    3396 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:33:51.315718    3396 out.go:177] * Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	I1014 07:33:51.319487    3396 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:33:51.319501    3396 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:33:51.319509    3396 cache.go:56] Caching tarball of preloaded images
	I1014 07:33:51.319577    3396 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:33:51.319583    3396 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:33:51.319785    3396 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/multinode-613000/config.json ...
	I1014 07:33:51.319799    3396 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/multinode-613000/config.json: {Name:mk7c488aab86ce4831f3c1f12dd3425fb565632d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:33:51.320171    3396 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:33:51.320219    3396 start.go:364] duration metric: took 42.917µs to acquireMachinesLock for "multinode-613000"
	I1014 07:33:51.320234    3396 start.go:93] Provisioning new machine with config: &{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:33:51.320261    3396 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:33:51.327675    3396 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:33:51.344882    3396 start.go:159] libmachine.API.Create for "multinode-613000" (driver="qemu2")
	I1014 07:33:51.344913    3396 client.go:168] LocalClient.Create starting
	I1014 07:33:51.344978    3396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:33:51.345014    3396 main.go:141] libmachine: Decoding PEM data...
	I1014 07:33:51.345025    3396 main.go:141] libmachine: Parsing certificate...
	I1014 07:33:51.345061    3396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:33:51.345089    3396 main.go:141] libmachine: Decoding PEM data...
	I1014 07:33:51.345097    3396 main.go:141] libmachine: Parsing certificate...
	I1014 07:33:51.345462    3396 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:33:51.500405    3396 main.go:141] libmachine: Creating SSH key...
	I1014 07:33:51.559542    3396 main.go:141] libmachine: Creating Disk image...
	I1014 07:33:51.559548    3396 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:33:51.559738    3396 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:33:51.569567    3396 main.go:141] libmachine: STDOUT: 
	I1014 07:33:51.569582    3396 main.go:141] libmachine: STDERR: 
	I1014 07:33:51.569649    3396 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2 +20000M
	I1014 07:33:51.578174    3396 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:33:51.578189    3396 main.go:141] libmachine: STDERR: 
	I1014 07:33:51.578200    3396 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:33:51.578218    3396 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:33:51.578234    3396 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:33:51.578260    3396 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:7e:18:41:b7:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:33:51.580035    3396 main.go:141] libmachine: STDOUT: 
	I1014 07:33:51.580050    3396 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:33:51.580078    3396 client.go:171] duration metric: took 235.164041ms to LocalClient.Create
	I1014 07:33:53.582254    3396 start.go:128] duration metric: took 2.262012292s to createHost
	I1014 07:33:53.582340    3396 start.go:83] releasing machines lock for "multinode-613000", held for 2.2621615s
	W1014 07:33:53.582408    3396 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:33:53.597754    3396 out.go:177] * Deleting "multinode-613000" in qemu2 ...
	W1014 07:33:53.622355    3396 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:33:53.622387    3396 start.go:729] Will try again in 5 seconds ...
	I1014 07:33:58.624528    3396 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:33:58.625157    3396 start.go:364] duration metric: took 511.916µs to acquireMachinesLock for "multinode-613000"
	I1014 07:33:58.625310    3396 start.go:93] Provisioning new machine with config: &{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:33:58.625622    3396 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:33:58.639313    3396 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:33:58.692284    3396 start.go:159] libmachine.API.Create for "multinode-613000" (driver="qemu2")
	I1014 07:33:58.692351    3396 client.go:168] LocalClient.Create starting
	I1014 07:33:58.692553    3396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:33:58.692646    3396 main.go:141] libmachine: Decoding PEM data...
	I1014 07:33:58.692672    3396 main.go:141] libmachine: Parsing certificate...
	I1014 07:33:58.692742    3396 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:33:58.692820    3396 main.go:141] libmachine: Decoding PEM data...
	I1014 07:33:58.692838    3396 main.go:141] libmachine: Parsing certificate...
	I1014 07:33:58.693519    3396 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:33:58.863319    3396 main.go:141] libmachine: Creating SSH key...
	I1014 07:33:58.913741    3396 main.go:141] libmachine: Creating Disk image...
	I1014 07:33:58.913746    3396 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:33:58.913927    3396 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:33:58.923810    3396 main.go:141] libmachine: STDOUT: 
	I1014 07:33:58.923832    3396 main.go:141] libmachine: STDERR: 
	I1014 07:33:58.923883    3396 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2 +20000M
	I1014 07:33:58.932340    3396 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:33:58.932354    3396 main.go:141] libmachine: STDERR: 
	I1014 07:33:58.932365    3396 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:33:58.932371    3396 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:33:58.932380    3396 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:33:58.932426    3396 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:58:65:2d:4c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:33:58.934215    3396 main.go:141] libmachine: STDOUT: 
	I1014 07:33:58.934229    3396 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:33:58.934244    3396 client.go:171] duration metric: took 241.892875ms to LocalClient.Create
	I1014 07:34:00.936390    3396 start.go:128] duration metric: took 2.310790958s to createHost
	I1014 07:34:00.936449    3396 start.go:83] releasing machines lock for "multinode-613000", held for 2.311316875s
	W1014 07:34:00.936852    3396 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-613000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-613000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:34:00.949556    3396 out.go:201] 
	W1014 07:34:00.953673    3396 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:34:00.953697    3396 out.go:270] * 
	* 
	W1014 07:34:00.956148    3396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:34:00.967522    3396 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-613000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (72.093334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.86s)
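
The root cause for this whole group is visible in the STDERR above: minikube's qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon's Unix socket at /var/run/socket_vmnet. "Connection refused" means nothing is listening there, so host creation aborts (GUEST_PROVISION, exit status 80) and every later step in the serial suite inherits a stopped host. Below is a minimal Go sketch of the reachability check that error implies; the file and helper names are hypothetical, and this is a diagnostic sketch, not minikube code.

	// probe_socket_vmnet.go (hypothetical): dial the Unix socket that
	// socket_vmnet_client needs. A stopped or missing daemon reproduces
	// the "Connection refused" captured in the STDERR above.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing log line
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}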

TestMultiNode/serial/DeployApp2Nodes (116.94s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (133.704042ms)

** stderr ** 
	error: cluster "multinode-613000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- rollout status deployment/busybox: exit status 1 (61.434375ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.895958ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:01.313575    1497 retry.go:31] will retry after 598.477308ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.669917ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:02.022107    1497 retry.go:31] will retry after 1.56003427s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.970583ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:03.692418    1497 retry.go:31] will retry after 1.727036395s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.677209ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:05.531406    1497 retry.go:31] will retry after 2.727145458s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.768666ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:08.369695    1497 retry.go:31] will retry after 5.610716154s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.471709ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:14.093248    1497 retry.go:31] will retry after 5.502135905s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.269166ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:19.705980    1497 retry.go:31] will retry after 13.005881696s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.188625ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:32.820193    1497 retry.go:31] will retry after 14.309195164s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.282125ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:34:47.239782    1497 retry.go:31] will retry after 19.235711391s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.876084ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 07:35:06.586603    1497 retry.go:31] will retry after 51.013986063s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.793042ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.737167ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.io: exit status 1 (60.933541ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.787ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.524166ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (33.541334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (116.94s)
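
With no cluster behind the kubeconfig, every kubectl call above fails immediately, and the harness waits longer between attempts: the "will retry after" intervals grow roughly exponentially (598ms, 1.56s, ... 51s) with jitter until the retry budget is exhausted. A minimal sketch of that backoff pattern follows; it is illustrative only, since this log does not show retry.go's actual internals.

	// backoff_sketch.go (hypothetical): jittered exponential backoff with an
	// overall deadline, mirroring the shape of the "will retry after" lines.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(deadline time.Duration, op func() error) error {
		start := time.Now()
		wait := 500 * time.Millisecond
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up after %s: %w", deadline, err)
			}
			// Sleep a jittered interval around wait, then double it.
			sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			wait *= 2
		}
	}

	func main() {
		_ = retryWithBackoff(2*time.Second, func() error {
			return errors.New(`no server found for cluster "multinode-613000"`)
		})
	}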

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (60.890375ms)

** stderr ** 
	error: no server found for cluster "multinode-613000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (33.772542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-613000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-613000 -v 3 --alsologtostderr: exit status 83 (45.0425ms)

-- stdout --
	* The control-plane node multinode-613000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-613000"

-- /stdout --
** stderr ** 
	I1014 07:35:58.115792    3499 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:35:58.116184    3499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.116189    3499 out.go:358] Setting ErrFile to fd 2...
	I1014 07:35:58.116191    3499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.116367    3499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:35:58.116591    3499 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:35:58.116812    3499 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:35:58.122100    3499 out.go:177] * The control-plane node multinode-613000 host is not running: state=Stopped
	I1014 07:35:58.126120    3499 out.go:177]   To start a cluster, run: "minikube start -p multinode-613000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-613000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (33.908834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-613000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-613000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (33.962708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-613000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-613000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-613000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (34.835709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-613000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-613000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-613000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-613000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (34.265875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
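
The assertion at multinode_test.go:166 parses the profile JSON above and counts the entries in Config.Nodes: it expects 3 (presumably the two nodes requested at start plus the one from the AddNode step), but the stored profile still holds only the single control-plane placeholder, because no VM was ever created. A minimal sketch of that count, using only field names visible in the captured JSON (not the test's own code):

	// profile_nodes.go (hypothetical): decode just enough of
	// `minikube profile list --output json` to count a profile's nodes.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name         string
					ControlPlane bool
					Worker       bool
				}
			}
		}
	}

	func main() {
		// Trimmed from the log: one placeholder node where three were expected.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-613000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}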

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status --output json --alsologtostderr: exit status 7 (34.385875ms)

-- stdout --
	{"Name":"multinode-613000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1014 07:35:58.349916    3511 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:35:58.350122    3511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.350126    3511 out.go:358] Setting ErrFile to fd 2...
	I1014 07:35:58.350128    3511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.350259    3511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:35:58.350388    3511 out.go:352] Setting JSON to true
	I1014 07:35:58.350399    3511 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:35:58.350450    3511 notify.go:220] Checking for updates...
	I1014 07:35:58.350612    3511 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:35:58.350621    3511 status.go:174] checking status of multinode-613000 ...
	I1014 07:35:58.350859    3511 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:35:58.350863    3511 status.go:384] host is not running, skipping remaining checks
	I1014 07:35:58.350865    3511 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-613000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (34.068417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
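
The decode failure at multinode_test.go:191 is a shape mismatch rather than corrupt output: for this single-node profile, `status --output json` printed one JSON object (see the stdout above), while the test unmarshals into a slice ([]cluster.Status), which encoding/json rejects with exactly the error shown. Below is a minimal sketch of the mismatch and a tolerant fallback; Status here is a stand-in for minikube's cluster.Status, not its real definition.

	// status_decode.go (hypothetical): accept either a single status object
	// or an array of them, since single- vs multi-node output differs in shape.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func decodeStatuses(raw []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(raw, &many); err == nil {
			return many, nil
		}
		// Fall back to the single-object form emitted for one-node clusters.
		var one Status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-613000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		got, err := decodeStatuses(raw)
		fmt.Println(got, err)
	}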

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 node stop m03: exit status 85 (51.490542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-613000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status: exit status 7 (33.682291ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr: exit status 7 (33.989167ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1014 07:35:58.504071    3519 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:35:58.504258    3519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.504261    3519 out.go:358] Setting ErrFile to fd 2...
	I1014 07:35:58.504263    3519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.504395    3519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:35:58.504517    3519 out.go:352] Setting JSON to false
	I1014 07:35:58.504529    3519 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:35:58.504577    3519 notify.go:220] Checking for updates...
	I1014 07:35:58.504745    3519 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:35:58.504753    3519 status.go:174] checking status of multinode-613000 ...
	I1014 07:35:58.505005    3519 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:35:58.505009    3519 status.go:384] host is not running, skipping remaining checks
	I1014 07:35:58.505011    3519 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (34.185083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
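
`node stop m03` exits 85 because the third node was never created (GUEST_NODE_RETRIEVE), and the follow-up check at multinode_test.go:267 then fails as well: judging by its "incorrect number of running kubelets" message, it counts running kubelet entries in the plain-text status output and finds none, since the only node is Stopped. A minimal sketch of that counting step (hypothetical helper, not the test's code):

	// kubelet_count.go (hypothetical): count "kubelet: Running" lines in
	// `minikube status` text output; the run above yields zero.
	package main

	import (
		"fmt"
		"strings"
	)

	func countRunningKubelets(out string) int {
		n := 0
		for _, line := range strings.Split(out, "\n") {
			if strings.TrimSpace(line) == "kubelet: Running" {
				n++
			}
		}
		return n
	}

	func main() {
		stopped := "multinode-613000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println(countRunningKubelets(stopped)) // prints 0
	}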

TestMultiNode/serial/StartAfterStop (43.86s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.632583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1014 07:35:58.572496    3523 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:35:58.572768    3523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.572772    3523 out.go:358] Setting ErrFile to fd 2...
	I1014 07:35:58.572774    3523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.572912    3523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:35:58.573146    3523 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:35:58.573355    3523 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:35:58.577155    3523 out.go:201] 
	W1014 07:35:58.580117    3523 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1014 07:35:58.580123    3523 out.go:270] * 
	* 
	W1014 07:35:58.581683    3523 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:35:58.584972    3523 out.go:201] 

** /stderr **
multinode_test.go:284: I1014 07:35:58.572496    3523 out.go:345] Setting OutFile to fd 1 ...
I1014 07:35:58.572768    3523 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:35:58.572772    3523 out.go:358] Setting ErrFile to fd 2...
I1014 07:35:58.572774    3523 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:35:58.572912    3523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
I1014 07:35:58.573146    3523 mustload.go:65] Loading cluster: multinode-613000
I1014 07:35:58.573355    3523 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:35:58.577155    3523 out.go:201] 
W1014 07:35:58.580117    3523 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1014 07:35:58.580123    3523 out.go:270] * 
* 
W1014 07:35:58.581683    3523 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1014 07:35:58.584972    3523 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-613000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (34.378792ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1014 07:35:58.621844    3525 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:35:58.622035    3525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.622038    3525 out.go:358] Setting ErrFile to fd 2...
	I1014 07:35:58.622040    3525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:58.622176    3525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:35:58.622304    3525 out.go:352] Setting JSON to false
	I1014 07:35:58.622315    3525 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:35:58.622357    3525 notify.go:220] Checking for updates...
	I1014 07:35:58.623010    3525 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:35:58.623034    3525 status.go:174] checking status of multinode-613000 ...
	I1014 07:35:58.623504    3525 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:35:58.623510    3525 status.go:384] host is not running, skipping remaining checks
	I1014 07:35:58.623512    3525 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 07:35:58.624511    1497 retry.go:31] will retry after 558.136423ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (77.207917ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1014 07:35:59.260017    3527 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:35:59.260265    3527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:59.260269    3527 out.go:358] Setting ErrFile to fd 2...
	I1014 07:35:59.260272    3527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:35:59.260457    3527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:35:59.260607    3527 out.go:352] Setting JSON to false
	I1014 07:35:59.260622    3527 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:35:59.260659    3527 notify.go:220] Checking for updates...
	I1014 07:35:59.260908    3527 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:35:59.260920    3527 status.go:174] checking status of multinode-613000 ...
	I1014 07:35:59.261216    3527 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:35:59.261221    3527 status.go:384] host is not running, skipping remaining checks
	I1014 07:35:59.261223    3527 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 07:35:59.262201    1497 retry.go:31] will retry after 1.85553735s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (80.942375ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1014 07:36:01.198760    3531 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:01.199022    3531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:01.199026    3531 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:01.199029    3531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:01.199226    3531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:01.199412    3531 out.go:352] Setting JSON to false
	I1014 07:36:01.199429    3531 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:01.199476    3531 notify.go:220] Checking for updates...
	I1014 07:36:01.199717    3531 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:01.199727    3531 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:01.200028    3531 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:01.200033    3531 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:01.200035    3531 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 07:36:01.201113    1497 retry.go:31] will retry after 2.265104454s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (78.050583ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1014 07:36:03.544441    3533 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:03.544692    3533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:03.544696    3533 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:03.544699    3533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:03.544843    3533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:03.544992    3533 out.go:352] Setting JSON to false
	I1014 07:36:03.545004    3533 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:03.545046    3533 notify.go:220] Checking for updates...
	I1014 07:36:03.545235    3533 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:03.545251    3533 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:03.545542    3533 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:03.545546    3533 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:03.545548    3533 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 07:36:03.546596    1497 retry.go:31] will retry after 3.222472777s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (78.65075ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1014 07:36:06.847936    3537 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:06.848163    3537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:06.848168    3537 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:06.848171    3537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:06.848341    3537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:06.848520    3537 out.go:352] Setting JSON to false
	I1014 07:36:06.848535    3537 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:06.848585    3537 notify.go:220] Checking for updates...
	I1014 07:36:06.848835    3537 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:06.848846    3537 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:06.849166    3537 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:06.849171    3537 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:06.849173    3537 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 07:36:06.850208    1497 retry.go:31] will retry after 5.693932351s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (76.496583ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1014 07:36:12.620652    3543 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:12.620891    3543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:12.620896    3543 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:12.620899    3543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:12.621101    3543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:12.621263    3543 out.go:352] Setting JSON to false
	I1014 07:36:12.621278    3543 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:12.621318    3543 notify.go:220] Checking for updates...
	I1014 07:36:12.621576    3543 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:12.621586    3543 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:12.621915    3543 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:12.621919    3543 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:12.621922    3543 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 07:36:12.622929    1497 retry.go:31] will retry after 4.669985076s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (81.144ms)

                                                
                                                
-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:36:17.372352    3547 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:17.372594    3547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:17.372598    3547 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:17.372602    3547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:17.372754    3547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:17.372910    3547 out.go:352] Setting JSON to false
	I1014 07:36:17.372923    3547 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:17.372962    3547 notify.go:220] Checking for updates...
	I1014 07:36:17.373167    3547 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:17.373177    3547 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:17.373468    3547 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:17.373472    3547 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:17.373474    3547 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 07:36:17.374459    1497 retry.go:31] will retry after 10.51091907s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (79.4665ms)

                                                
                                                
-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:36:27.964899    3556 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:27.965131    3556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:27.965138    3556 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:27.965142    3556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:27.965340    3556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:27.965505    3556 out.go:352] Setting JSON to false
	I1014 07:36:27.965520    3556 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:27.965559    3556 notify.go:220] Checking for updates...
	I1014 07:36:27.965784    3556 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:27.965795    3556 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:27.966129    3556 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:27.966134    3556 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:27.966136    3556 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 07:36:27.967107    1497 retry.go:31] will retry after 14.318331493s: exit status 7
E1014 07:36:40.230496    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (78.127667ms)

                                                
                                                
-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:36:42.363629    3562 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:42.363876    3562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:42.363880    3562 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:42.363883    3562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:42.364042    3562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:42.364182    3562 out.go:352] Setting JSON to false
	I1014 07:36:42.364196    3562 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:42.364233    3562 notify.go:220] Checking for updates...
	I1014 07:36:42.364447    3562 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:42.364457    3562 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:42.364756    3562 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:42.364761    3562 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:42.364764    3562 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-613000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (36.509583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (43.86s)
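
The retry.go lines above show the harness polling "minikube status" with jittered, growing delays (5.7s, 4.7s, 10.5s, 14.3s) until the roughly 40-second budget is spent. Below is a minimal Go sketch of that retry-until-deadline pattern, assuming a caller-supplied check function; it illustrates the pattern only and is not minikube's actual pkg/util/retry code:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntilDeadline polls check until it succeeds or maxElapsed is spent,
	// sleeping a jittered, roughly doubling interval between attempts.
	func retryUntilDeadline(check func() error, maxElapsed time.Duration) error {
		deadline := time.Now().Add(maxElapsed)
		delay := 5 * time.Second
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out, last error: %w", err)
			}
			// Jitter so repeated runs do not retry in lockstep.
			sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		err := retryUntilDeadline(func() error {
			return errors.New("exit status 7") // host still reports Stopped
		}, 20*time.Second)
		fmt.Println(err)
	}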

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-613000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-613000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-613000: (3.427778583s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.235763833s)

                                                
                                                
-- stdout --
	* [multinode-613000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	* Restarting existing qemu2 VM for "multinode-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:36:45.937006    3588 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:45.937234    3588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:45.937239    3588 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:45.937242    3588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:45.937416    3588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:45.939006    3588 out.go:352] Setting JSON to false
	I1014 07:36:45.960173    3588 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3975,"bootTime":1728912630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:36:45.960247    3588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:36:45.964006    3588 out.go:177] * [multinode-613000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:36:45.972078    3588 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:36:45.972147    3588 notify.go:220] Checking for updates...
	I1014 07:36:45.978967    3588 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:36:45.981958    3588 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:36:45.984915    3588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:36:45.987970    3588 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:36:45.990984    3588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:36:45.994314    3588 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:45.994369    3588 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:36:45.998969    3588 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:36:46.005926    3588 start.go:297] selected driver: qemu2
	I1014 07:36:46.005933    3588 start.go:901] validating driver "qemu2" against &{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:36:46.005983    3588 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:36:46.008587    3588 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:36:46.008617    3588 cni.go:84] Creating CNI manager for ""
	I1014 07:36:46.008640    3588 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:36:46.008685    3588 start.go:340] cluster config:
	{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:36:46.013045    3588 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:36:46.021801    3588 out.go:177] * Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	I1014 07:36:46.025915    3588 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:36:46.025928    3588 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:36:46.025938    3588 cache.go:56] Caching tarball of preloaded images
	I1014 07:36:46.026008    3588 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:36:46.026013    3588 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:36:46.026075    3588 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/multinode-613000/config.json ...
	I1014 07:36:46.026528    3588 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:36:46.026583    3588 start.go:364] duration metric: took 46.125µs to acquireMachinesLock for "multinode-613000"
	I1014 07:36:46.026594    3588 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:36:46.026597    3588 fix.go:54] fixHost starting: 
	I1014 07:36:46.026737    3588 fix.go:112] recreateIfNeeded on multinode-613000: state=Stopped err=<nil>
	W1014 07:36:46.026745    3588 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:36:46.034921    3588 out.go:177] * Restarting existing qemu2 VM for "multinode-613000" ...
	I1014 07:36:46.038938    3588 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:36:46.038973    3588 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:58:65:2d:4c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:36:46.041080    3588 main.go:141] libmachine: STDOUT: 
	I1014 07:36:46.041100    3588 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:36:46.041133    3588 fix.go:56] duration metric: took 14.5345ms for fixHost
	I1014 07:36:46.041138    3588 start.go:83] releasing machines lock for "multinode-613000", held for 14.551458ms
	W1014 07:36:46.041144    3588 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:36:46.041186    3588 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:36:46.041191    3588 start.go:729] Will try again in 5 seconds ...
	I1014 07:36:51.041442    3588 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:36:51.041921    3588 start.go:364] duration metric: took 363.209µs to acquireMachinesLock for "multinode-613000"
	I1014 07:36:51.042049    3588 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:36:51.042069    3588 fix.go:54] fixHost starting: 
	I1014 07:36:51.042825    3588 fix.go:112] recreateIfNeeded on multinode-613000: state=Stopped err=<nil>
	W1014 07:36:51.042855    3588 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:36:51.047392    3588 out.go:177] * Restarting existing qemu2 VM for "multinode-613000" ...
	I1014 07:36:51.055329    3588 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:36:51.055738    3588 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:58:65:2d:4c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:36:51.065745    3588 main.go:141] libmachine: STDOUT: 
	I1014 07:36:51.065807    3588 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:36:51.065891    3588 fix.go:56] duration metric: took 23.82175ms for fixHost
	I1014 07:36:51.065914    3588 start.go:83] releasing machines lock for "multinode-613000", held for 23.967583ms
	W1014 07:36:51.066107    3588 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:36:51.072330    3588 out.go:201] 
	W1014 07:36:51.076397    3588 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:36:51.076434    3588 out.go:270] * 
	* 
	W1014 07:36:51.079014    3588 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:36:51.086329    3588 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-613000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-613000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (35.807666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.81s)
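
Every restart attempt in this run dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which is what one would expect if the socket_vmnet daemon is not running on the build agent. Before rerunning, that hypothesis is cheap to confirm by dialing the socket directly; a small sketch follows (the socket path is taken from the SocketVMnetPath field in the config logged above, everything else is illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the failure in this report:
			// nothing is listening, so QEMU guest networking cannot attach.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}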

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 node delete m03: exit status 83 (44.67325ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-613000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-613000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-613000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr: exit status 7 (33.937042ms)

                                                
                                                
-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:36:51.286962    3604 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:51.287145    3604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:51.287148    3604 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:51.287151    3604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:51.287304    3604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:51.287438    3604 out.go:352] Setting JSON to false
	I1014 07:36:51.287456    3604 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:51.287496    3604 notify.go:220] Checking for updates...
	I1014 07:36:51.287659    3604 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:51.287667    3604 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:51.288158    3604 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:51.288165    3604 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:51.288167    3604 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (33.728416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
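
Three distinct exit statuses recur in this section, and each tracks one failure shape: 7 from status checks against a stopped host, 80 from GUEST_PROVISION when the VM cannot start, and 83 when a command needs a running control plane. The mapping in the sketch below is inferred from this report's output alone, not from minikube's documented reason codes:

	package main

	import "fmt"

	// classify maps the exit statuses observed in this report to the
	// situation each one accompanied. Inferred from this log only.
	func classify(code int) string {
		switch code {
		case 7:
			return "status check on a stopped host (state=Stopped)"
		case 80:
			return "GUEST_PROVISION: VM failed to start (socket_vmnet refused)"
		case 83:
			return "command requires a running control plane"
		default:
			return "not observed in this section"
		}
	}

	func main() {
		for _, c := range []int{7, 80, 83} {
			fmt.Printf("exit status %d -> %s\n", c, classify(c))
		}
	}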

                                                
                                    
TestMultiNode/serial/StopMultiNode (3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-613000 stop: (2.863543333s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status: exit status 7 (68.552125ms)

                                                
                                                
-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr: exit status 7 (35.551291ms)

                                                
                                                
-- stdout --
	multinode-613000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:36:54.289255    3630 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:54.289461    3630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:54.289469    3630 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:54.289471    3630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:54.289595    3630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:54.289718    3630 out.go:352] Setting JSON to false
	I1014 07:36:54.289733    3630 mustload.go:65] Loading cluster: multinode-613000
	I1014 07:36:54.289765    3630 notify.go:220] Checking for updates...
	I1014 07:36:54.289916    3630 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:54.289925    3630 status.go:174] checking status of multinode-613000 ...
	I1014 07:36:54.290153    3630 status.go:371] multinode-613000 host status = "Stopped" (err=<nil>)
	I1014 07:36:54.290156    3630 status.go:384] host is not running, skipping remaining checks
	I1014 07:36:54.290158    3630 status.go:176] multinode-613000 status: &{Name:multinode-613000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (34.593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.00s)
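
The "incorrect number of stopped hosts/kubelets" complaints arise because the test counts "host: Stopped" and "kubelet: Stopped" occurrences in the status output and expects one per node, but only the single control-plane node ever existed in this run. A hedged sketch of that style of assertion (illustrative, not the verbatim multinode_test.go code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output as captured above; only one node is listed.
		stdout := "multinode-613000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		wantNodes := 2 // a control plane plus one worker was expected
		if got := strings.Count(stdout, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
		if got := strings.Count(stdout, "kubelet: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
		}
	}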

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
E1014 07:36:57.131404    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.1921455s)

                                                
                                                
-- stdout --
	* [multinode-613000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	* Restarting existing qemu2 VM for "multinode-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-613000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:36:54.357135    3634 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:36:54.357286    3634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:54.357290    3634 out.go:358] Setting ErrFile to fd 2...
	I1014 07:36:54.357292    3634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:36:54.357417    3634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:36:54.358496    3634 out.go:352] Setting JSON to false
	I1014 07:36:54.377090    3634 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3984,"bootTime":1728912630,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:36:54.377156    3634 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:36:54.381602    3634 out.go:177] * [multinode-613000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:36:54.389288    3634 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:36:54.389348    3634 notify.go:220] Checking for updates...
	I1014 07:36:54.395267    3634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:36:54.398296    3634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:36:54.399649    3634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:36:54.402282    3634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:36:54.405257    3634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:36:54.408607    3634 config.go:182] Loaded profile config "multinode-613000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:36:54.408883    3634 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:36:54.413228    3634 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:36:54.420283    3634 start.go:297] selected driver: qemu2
	I1014 07:36:54.420290    3634 start.go:901] validating driver "qemu2" against &{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:36:54.420338    3634 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:36:54.422881    3634 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:36:54.422903    3634 cni.go:84] Creating CNI manager for ""
	I1014 07:36:54.422927    3634 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:36:54.422966    3634 start.go:340] cluster config:
	{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:36:54.427380    3634 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:36:54.435287    3634 out.go:177] * Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	I1014 07:36:54.439212    3634 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:36:54.439225    3634 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:36:54.439233    3634 cache.go:56] Caching tarball of preloaded images
	I1014 07:36:54.439279    3634 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:36:54.439287    3634 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:36:54.439348    3634 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/multinode-613000/config.json ...
	I1014 07:36:54.439816    3634 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:36:54.439847    3634 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "multinode-613000"
	I1014 07:36:54.439858    3634 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:36:54.439863    3634 fix.go:54] fixHost starting: 
	I1014 07:36:54.439995    3634 fix.go:112] recreateIfNeeded on multinode-613000: state=Stopped err=<nil>
	W1014 07:36:54.440003    3634 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:36:54.444337    3634 out.go:177] * Restarting existing qemu2 VM for "multinode-613000" ...
	I1014 07:36:54.452236    3634 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:36:54.452273    3634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:58:65:2d:4c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:36:54.454500    3634 main.go:141] libmachine: STDOUT: 
	I1014 07:36:54.454516    3634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:36:54.454544    3634 fix.go:56] duration metric: took 14.679666ms for fixHost
	I1014 07:36:54.454548    3634 start.go:83] releasing machines lock for "multinode-613000", held for 14.696959ms
	W1014 07:36:54.454554    3634 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:36:54.454602    3634 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:36:54.454607    3634 start.go:729] Will try again in 5 seconds ...
	I1014 07:36:59.456678    3634 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:36:59.457115    3634 start.go:364] duration metric: took 347.334µs to acquireMachinesLock for "multinode-613000"
	I1014 07:36:59.457273    3634 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:36:59.457293    3634 fix.go:54] fixHost starting: 
	I1014 07:36:59.457983    3634 fix.go:112] recreateIfNeeded on multinode-613000: state=Stopped err=<nil>
	W1014 07:36:59.458008    3634 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:36:59.466604    3634 out.go:177] * Restarting existing qemu2 VM for "multinode-613000" ...
	I1014 07:36:59.470667    3634 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:36:59.470845    3634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:58:65:2d:4c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/multinode-613000/disk.qcow2
	I1014 07:36:59.481415    3634 main.go:141] libmachine: STDOUT: 
	I1014 07:36:59.481499    3634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:36:59.481574    3634 fix.go:56] duration metric: took 24.28075ms for fixHost
	I1014 07:36:59.481592    3634 start.go:83] releasing machines lock for "multinode-613000", held for 24.449ms
	W1014 07:36:59.481795    3634 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-613000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:36:59.489602    3634 out.go:201] 
	W1014 07:36:59.493713    3634 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:36:59.493745    3634 out.go:270] * 
	* 
	W1014 07:36:59.496295    3634 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:36:59.504603    3634 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (73.329333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
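
The status.go:176 lines print the driver-side status struct verbatim, so its shape can be read straight off this log, which is handy when post-processing these reports. Below is a reconstruction for parsing purposes; the field names come from the output above, while the types are assumptions:

	package main

	import "fmt"

	// NodeStatus mirrors the struct printed at status.go:176 in this report.
	// Field names are taken from the log; string/bool types are assumed.
	type NodeStatus struct {
		Name       string
		Host       string // "Stopped" in every sample in this section
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := NodeStatus{Name: "multinode-613000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		fmt.Printf("%+v\n", s)
	}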

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-613000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-613000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-613000-m01 --driver=qemu2 : exit status 80 (9.961292208s)

                                                
                                                
-- stdout --
	* [multinode-613000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-613000-m01" primary control-plane node in "multinode-613000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-613000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-613000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-613000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-613000-m02 --driver=qemu2 : exit status 80 (9.873948042s)

-- stdout --
	* [multinode-613000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-613000-m02" primary control-plane node in "multinode-613000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-613000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-613000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-613000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-613000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-613000: exit status 83 (84.121792ms)

-- stdout --
	* The control-plane node multinode-613000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-613000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-613000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (33.965959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.07s)
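
Every failing qemu2 start in this report dies at the same point: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, i.e. no socket_vmnet daemon is listening on the CI host. A minimal check for the daemon (a sketch only; it assumes socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe, and takes the socket path from the log above):

	# Does the UNIX socket exist, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null   # "Connection refused" here reproduces the failure
	# If the daemon is managed through Homebrew services, restarting it may clear the condition
	sudo brew services restart socket_vmnet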

TestPreload (9.97s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-604000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-604000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.810909s)

-- stdout --
	* [test-preload-604000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-604000" primary control-plane node in "test-preload-604000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-604000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:37:19.811306    3699 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:37:19.811475    3699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:37:19.811479    3699 out.go:358] Setting ErrFile to fd 2...
	I1014 07:37:19.811481    3699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:37:19.811602    3699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:37:19.812797    3699 out.go:352] Setting JSON to false
	I1014 07:37:19.830259    3699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4009,"bootTime":1728912630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:37:19.830341    3699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:37:19.835614    3699 out.go:177] * [test-preload-604000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:37:19.843606    3699 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:37:19.843649    3699 notify.go:220] Checking for updates...
	I1014 07:37:19.850526    3699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:37:19.853601    3699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:37:19.856625    3699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:37:19.859569    3699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:37:19.862607    3699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:37:19.865871    3699 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:37:19.865918    3699 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:37:19.870573    3699 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:37:19.876506    3699 start.go:297] selected driver: qemu2
	I1014 07:37:19.876512    3699 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:37:19.876518    3699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:37:19.879003    3699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:37:19.882589    3699 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:37:19.885660    3699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:37:19.885678    3699 cni.go:84] Creating CNI manager for ""
	I1014 07:37:19.885699    3699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:37:19.885703    3699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:37:19.885736    3699 start.go:340] cluster config:
	{Name:test-preload-604000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:37:19.890205    3699 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.898586    3699 out.go:177] * Starting "test-preload-604000" primary control-plane node in "test-preload-604000" cluster
	I1014 07:37:19.902594    3699 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1014 07:37:19.902687    3699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/test-preload-604000/config.json ...
	I1014 07:37:19.902713    3699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/test-preload-604000/config.json: {Name:mk5cbd9704c4f1a01d30cbda45c81efd2cad0bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:37:19.902708    3699 cache.go:107] acquiring lock: {Name:mkfbee7ed24a5bff77ccc82c9584e51a8ba123a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.902708    3699 cache.go:107] acquiring lock: {Name:mka2dd3219f208c64b717849208e04c48d02cadd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.902725    3699 cache.go:107] acquiring lock: {Name:mk60b9d836009d28838d006c7c4a37a02e4f6f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.902865    3699 cache.go:107] acquiring lock: {Name:mka888dcb8c324ca0f187ea1b5558e0ec8007aa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.902937    3699 cache.go:107] acquiring lock: {Name:mk827fdeed38d43f623c5c0d3de9db4886dad68c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.902945    3699 cache.go:107] acquiring lock: {Name:mk0a711acc4984814e1d41239b9b6cef93caebe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.902973    3699 cache.go:107] acquiring lock: {Name:mk91ba5802a8a200343675bcdad57e43fdb3cf99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.902949    3699 cache.go:107] acquiring lock: {Name:mkde45df3b5842b3bfec4d29f94f8a3d40a1deb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:19.903303    3699 start.go:360] acquireMachinesLock for test-preload-604000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:37:19.903394    3699 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1014 07:37:19.903507    3699 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 07:37:19.903537    3699 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1014 07:37:19.903552    3699 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:37:19.903597    3699 start.go:364] duration metric: took 278.667µs to acquireMachinesLock for "test-preload-604000"
	I1014 07:37:19.903624    3699 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:37:19.903642    3699 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1014 07:37:19.903594    3699 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:37:19.903633    3699 start.go:93] Provisioning new machine with config: &{Name:test-preload-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:37:19.903672    3699 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:37:19.903614    3699 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1014 07:37:19.907555    3699 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:37:19.917336    3699 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:37:19.917469    3699 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1014 07:37:19.917887    3699 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1014 07:37:19.919582    3699 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:37:19.920388    3699 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:37:19.920402    3699 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1014 07:37:19.920490    3699 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1014 07:37:19.920519    3699 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 07:37:19.926036    3699 start.go:159] libmachine.API.Create for "test-preload-604000" (driver="qemu2")
	I1014 07:37:19.926054    3699 client.go:168] LocalClient.Create starting
	I1014 07:37:19.926129    3699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:37:19.926166    3699 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:19.926175    3699 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:19.926216    3699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:37:19.926245    3699 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:19.926254    3699 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:19.926585    3699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:37:20.084554    3699 main.go:141] libmachine: Creating SSH key...
	I1014 07:37:20.144078    3699 main.go:141] libmachine: Creating Disk image...
	I1014 07:37:20.144094    3699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:37:20.144282    3699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2
	I1014 07:37:20.153847    3699 main.go:141] libmachine: STDOUT: 
	I1014 07:37:20.153868    3699 main.go:141] libmachine: STDERR: 
	I1014 07:37:20.153923    3699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2 +20000M
	I1014 07:37:20.163100    3699 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:37:20.163122    3699 main.go:141] libmachine: STDERR: 
	I1014 07:37:20.163139    3699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2
	I1014 07:37:20.163145    3699 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:37:20.163157    3699 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:37:20.163191    3699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:2e:6f:71:78:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2
	I1014 07:37:20.165467    3699 main.go:141] libmachine: STDOUT: 
	I1014 07:37:20.165483    3699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:37:20.165503    3699 client.go:171] duration metric: took 239.4485ms to LocalClient.Create
	I1014 07:37:20.562219    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1014 07:37:20.571437    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1014 07:37:20.599599    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1014 07:37:20.625171    3699 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1014 07:37:20.625189    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1014 07:37:20.736446    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1014 07:37:20.736466    3699 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 833.671042ms
	I1014 07:37:20.736482    3699 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1014 07:37:20.774371    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1014 07:37:20.777825    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1014 07:37:20.817814    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1014 07:37:20.962718    3699 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1014 07:37:20.962813    3699 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 07:37:21.586606    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1014 07:37:21.586680    3699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.684015958s
	I1014 07:37:21.586710    3699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1014 07:37:22.165809    3699 start.go:128] duration metric: took 2.262155709s to createHost
	I1014 07:37:22.165893    3699 start.go:83] releasing machines lock for "test-preload-604000", held for 2.262317083s
	W1014 07:37:22.165957    3699 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:37:22.183586    3699 out.go:177] * Deleting "test-preload-604000" in qemu2 ...
	W1014 07:37:22.211481    3699 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:37:22.211517    3699 start.go:729] Will try again in 5 seconds ...
	I1014 07:37:23.079017    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1014 07:37:23.079074    3699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.176319583s
	I1014 07:37:23.079099    3699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1014 07:37:23.503376    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1014 07:37:23.503425    3699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.600605041s
	I1014 07:37:23.503453    3699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1014 07:37:24.695362    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1014 07:37:24.695418    3699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.792805292s
	I1014 07:37:24.695449    3699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1014 07:37:25.420304    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1014 07:37:25.420362    3699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.517547s
	I1014 07:37:25.420409    3699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1014 07:37:26.095065    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1014 07:37:26.095116    3699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.192556209s
	I1014 07:37:26.095175    3699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1014 07:37:27.211549    3699 start.go:360] acquireMachinesLock for test-preload-604000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:37:27.212057    3699 start.go:364] duration metric: took 422.625µs to acquireMachinesLock for "test-preload-604000"
	I1014 07:37:27.212198    3699 start.go:93] Provisioning new machine with config: &{Name:test-preload-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:37:27.212430    3699 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:37:27.229317    3699 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:37:27.279401    3699 start.go:159] libmachine.API.Create for "test-preload-604000" (driver="qemu2")
	I1014 07:37:27.279469    3699 client.go:168] LocalClient.Create starting
	I1014 07:37:27.279610    3699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:37:27.279691    3699 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:27.279729    3699 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:27.279805    3699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:37:27.279863    3699 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:27.279877    3699 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:27.280466    3699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:37:27.447339    3699 main.go:141] libmachine: Creating SSH key...
	I1014 07:37:27.518159    3699 main.go:141] libmachine: Creating Disk image...
	I1014 07:37:27.518169    3699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:37:27.518361    3699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2
	I1014 07:37:27.528363    3699 main.go:141] libmachine: STDOUT: 
	I1014 07:37:27.528379    3699 main.go:141] libmachine: STDERR: 
	I1014 07:37:27.528443    3699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2 +20000M
	I1014 07:37:27.537162    3699 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:37:27.537178    3699 main.go:141] libmachine: STDERR: 
	I1014 07:37:27.537189    3699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2
	I1014 07:37:27.537194    3699 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:37:27.537205    3699 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:37:27.537245    3699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:65:b3:5b:75:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/test-preload-604000/disk.qcow2
	I1014 07:37:27.539227    3699 main.go:141] libmachine: STDOUT: 
	I1014 07:37:27.539240    3699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:37:27.539253    3699 client.go:171] duration metric: took 259.783042ms to LocalClient.Create
	I1014 07:37:28.721475    3699 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1014 07:37:28.721546    3699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.818925917s
	I1014 07:37:28.721576    3699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1014 07:37:28.721629    3699 cache.go:87] Successfully saved all images to host disk.
	I1014 07:37:29.541442    3699 start.go:128] duration metric: took 2.328996083s to createHost
	I1014 07:37:29.541526    3699 start.go:83] releasing machines lock for "test-preload-604000", held for 2.329496125s
	W1014 07:37:29.541811    3699 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-604000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-604000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:37:29.557406    3699 out.go:201] 
	W1014 07:37:29.562531    3699 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:37:29.562562    3699 out.go:270] * 
	* 
	W1014 07:37:29.565107    3699 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:37:29.574413    3699 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-604000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-14 07:37:29.592485 -0700 PDT m=+3571.928908335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-604000 -n test-preload-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-604000 -n test-preload-604000: exit status 7 (73.382042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-604000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-604000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-604000
--- FAIL: TestPreload (9.97s)
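
Note the launch pattern in the trace above: minikube never invokes qemu-system-aarch64 directly. It runs socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3, matching the -netdev socket,id=net0,fd=3 argument. The "Connection refused" STDERR therefore comes from socket_vmnet_client before QEMU ever starts, which is also why the image caching above it still succeeds ("Successfully saved all images to host disk") while host creation fails. A trimmed sketch of that invocation, with profile-specific paths, firmware, QMP/pidfile, and MAC address elided (all remaining options are copied from the log; disk.qcow2 is a placeholder):

	# socket_vmnet_client opens the UNIX socket, then launches QEMU with the fd as 3
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2200 -smp 2 -display none \
	  -device virtio-net-pci,netdev=net0 \
	  -netdev socket,id=net0,fd=3 \
	  -daemonize disk.qcow2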

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-849000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-849000 --memory=2048 --driver=qemu2 : exit status 80 (9.8220685s)

-- stdout --
	* [scheduled-stop-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-849000" primary control-plane node in "scheduled-stop-849000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-849000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-849000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-849000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-849000" primary control-plane node in "scheduled-stop-849000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-849000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-849000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-14 07:37:39.569322 -0700 PDT m=+3581.905969126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-849000 -n scheduled-stop-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-849000 -n scheduled-stop-849000: exit status 7 (78.018625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-849000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-849000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-849000
--- FAIL: TestScheduledStopUnix (9.98s)
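
The post-mortem helpers treat exit status 7 from "minikube status" as potentially benign ("status error: exit status 7 (may be ok)"): status reports component state through its exit code, and in every trace in this report a 7 pairs with Host=Stopped, so the helper skips log retrieval instead of failing cleanup. Reproducing the check by hand (a sketch using this test's profile name):

	out/minikube-darwin-arm64 status --format='{{.Host}}' -p scheduled-stop-849000
	echo "exit: $?"   # prints "Stopped" then "exit: 7" for a host that never provisioned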

TestSkaffold (12.75s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3606943682 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3606943682 version: (1.014930166s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-362000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-362000 --memory=2600 --driver=qemu2 : exit status 80 (9.864152209s)

-- stdout --
	* [skaffold-362000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-362000" primary control-plane node in "skaffold-362000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-362000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-362000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-362000" primary control-plane node in "skaffold-362000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-362000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-14 07:37:52.323549 -0700 PDT m=+3594.660483085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-362000 -n skaffold-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-362000 -n skaffold-362000: exit status 7 (65.702417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-362000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-362000
--- FAIL: TestSkaffold (12.75s)
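
The near-uniform ~10 s wall time of these failing starts falls out of minikube's retry logic rather than any timeout. In the TestPreload trace above, the first createHost attempt takes 2.262 s, minikube deletes the half-created machine and backs off ("Will try again in 5 seconds ..."), and the second attempt takes 2.329 s before the hard GUEST_PROVISION exit:

	2.262 s (attempt 1) + 5 s (backoff) + 2.329 s (attempt 2) ≈ 9.6 s
	plus ISO/certificate setup overhead ≈ the 9.8-10 s reported per failing test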

TestRunningBinaryUpgrade (604.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1874456245 start -p running-upgrade-116000 --memory=2200 --vm-driver=qemu2 
E1014 07:38:46.496039    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1874456245 start -p running-upgrade-116000 --memory=2200 --vm-driver=qemu2 : (54.390709791s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-116000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-116000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.814566709s)

-- stdout --
	* [running-upgrade-116000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-116000" primary control-plane node in "running-upgrade-116000" cluster
	* Updating the running qemu2 "running-upgrade-116000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1014 07:39:10.535598    4051 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:39:10.536160    4051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:39:10.536164    4051 out.go:358] Setting ErrFile to fd 2...
	I1014 07:39:10.536167    4051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:39:10.536292    4051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:39:10.537776    4051 out.go:352] Setting JSON to false
	I1014 07:39:10.557654    4051 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4120,"bootTime":1728912630,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:39:10.557736    4051 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:39:10.562345    4051 out.go:177] * [running-upgrade-116000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:39:10.570213    4051 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:39:10.570247    4051 notify.go:220] Checking for updates...
	I1014 07:39:10.578183    4051 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:39:10.582193    4051 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:39:10.583412    4051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:39:10.586187    4051 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:39:10.589195    4051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:39:10.592475    4051 config.go:182] Loaded profile config "running-upgrade-116000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:39:10.596245    4051 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 07:39:10.599203    4051 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:39:10.603177    4051 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:39:10.610184    4051 start.go:297] selected driver: qemu2
	I1014 07:39:10.610201    4051 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61423 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:39:10.610245    4051 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:39:10.613099    4051 cni.go:84] Creating CNI manager for ""
	I1014 07:39:10.613293    4051 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:39:10.613487    4051 start.go:340] cluster config:
	{Name:running-upgrade-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61423 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:39:10.613720    4051 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:39:10.622179    4051 out.go:177] * Starting "running-upgrade-116000" primary control-plane node in "running-upgrade-116000" cluster
	I1014 07:39:10.626048    4051 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1014 07:39:10.626073    4051 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1014 07:39:10.626079    4051 cache.go:56] Caching tarball of preloaded images
	I1014 07:39:10.626170    4051 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:39:10.626176    4051 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1014 07:39:10.626235    4051 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/config.json ...
	I1014 07:39:10.626652    4051 start.go:360] acquireMachinesLock for running-upgrade-116000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:39:12.926011    4051 start.go:364] duration metric: took 2.299395666s to acquireMachinesLock for "running-upgrade-116000"
	I1014 07:39:12.926076    4051 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:39:12.926081    4051 fix.go:54] fixHost starting: 
	I1014 07:39:12.926822    4051 fix.go:112] recreateIfNeeded on running-upgrade-116000: state=Running err=<nil>
	W1014 07:39:12.926834    4051 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:39:12.934591    4051 out.go:177] * Updating the running qemu2 "running-upgrade-116000" VM ...
	I1014 07:39:12.937627    4051 machine.go:93] provisionDockerMachine start ...
	I1014 07:39:12.937701    4051 main.go:141] libmachine: Using SSH client type: native
	I1014 07:39:12.937852    4051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b22480] 0x100b24cc0 <nil>  [] 0s} localhost 61391 <nil> <nil>}
	I1014 07:39:12.937857    4051 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:39:13.005080    4051 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-116000
	
	I1014 07:39:13.005094    4051 buildroot.go:166] provisioning hostname "running-upgrade-116000"
	I1014 07:39:13.005163    4051 main.go:141] libmachine: Using SSH client type: native
	I1014 07:39:13.005275    4051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b22480] 0x100b24cc0 <nil>  [] 0s} localhost 61391 <nil> <nil>}
	I1014 07:39:13.005282    4051 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-116000 && echo "running-upgrade-116000" | sudo tee /etc/hostname
	I1014 07:39:13.076153    4051 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-116000
	
	I1014 07:39:13.076248    4051 main.go:141] libmachine: Using SSH client type: native
	I1014 07:39:13.076374    4051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b22480] 0x100b24cc0 <nil>  [] 0s} localhost 61391 <nil> <nil>}
	I1014 07:39:13.076384    4051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-116000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-116000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-116000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:39:13.143163    4051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:39:13.143180    4051 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19790-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19790-979/.minikube}
	I1014 07:39:13.143188    4051 buildroot.go:174] setting up certificates
	I1014 07:39:13.143193    4051 provision.go:84] configureAuth start
	I1014 07:39:13.143215    4051 provision.go:143] copyHostCerts
	I1014 07:39:13.143320    4051 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem, removing ...
	I1014 07:39:13.144127    4051 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem
	I1014 07:39:13.144445    4051 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem (1078 bytes)
	I1014 07:39:13.144663    4051 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem, removing ...
	I1014 07:39:13.144668    4051 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem
	I1014 07:39:13.144722    4051 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem (1123 bytes)
	I1014 07:39:13.144836    4051 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem, removing ...
	I1014 07:39:13.144841    4051 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem
	I1014 07:39:13.144880    4051 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem (1675 bytes)
	I1014 07:39:13.145008    4051 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-116000 san=[127.0.0.1 localhost minikube running-upgrade-116000]
	I1014 07:39:13.498494    4051 provision.go:177] copyRemoteCerts
	I1014 07:39:13.498590    4051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:39:13.498604    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	I1014 07:39:13.534733    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 07:39:13.542389    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 07:39:13.550107    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:39:13.558129    4051 provision.go:87] duration metric: took 414.929125ms to configureAuth
	I1014 07:39:13.558143    4051 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:39:13.558277    4051 config.go:182] Loaded profile config "running-upgrade-116000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:39:13.558331    4051 main.go:141] libmachine: Using SSH client type: native
	I1014 07:39:13.558434    4051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b22480] 0x100b24cc0 <nil>  [] 0s} localhost 61391 <nil> <nil>}
	I1014 07:39:13.558440    4051 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:39:13.625309    4051 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:39:13.625328    4051 buildroot.go:70] root file system type: tmpfs
	I1014 07:39:13.625383    4051 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:39:13.625470    4051 main.go:141] libmachine: Using SSH client type: native
	I1014 07:39:13.625583    4051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b22480] 0x100b24cc0 <nil>  [] 0s} localhost 61391 <nil> <nil>}
	I1014 07:39:13.625617    4051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:39:13.696919    4051 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:39:13.696992    4051 main.go:141] libmachine: Using SSH client type: native
	I1014 07:39:13.697114    4051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b22480] 0x100b24cc0 <nil>  [] 0s} localhost 61391 <nil> <nil>}
	I1014 07:39:13.697124    4051 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:39:13.769493    4051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:39:13.769506    4051 machine.go:96] duration metric: took 831.891834ms to provisionDockerMachine
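Note: the provisioning step above stages the regenerated unit as docker.service.new and only swaps it in (followed by daemon-reload and a docker restart) when `diff -u` reports a change, so an unchanged config costs no restart. A local Go sketch of the same compare-then-swap idea, assuming the paths from the log (the test does this remotely over SSH):

	package main

	import (
		"bytes"
		"log"
		"os"
	)

	func main() {
		const unit = "/lib/systemd/system/docker.service"
		cur, _ := os.ReadFile(unit) // a missing current unit just counts as different
		staged, err := os.ReadFile(unit + ".new")
		if err != nil {
			log.Fatalf("no staged unit: %v", err)
		}
		if bytes.Equal(cur, staged) {
			log.Println("unit unchanged; skipping daemon-reload and docker restart")
			return
		}
		// Swap the staged unit in; daemon-reload and restart would follow here.
		if err := os.Rename(unit+".new", unit); err != nil {
			log.Fatal(err)
		}
		log.Println("unit updated; next: systemctl daemon-reload && systemctl restart docker")
	}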
	I1014 07:39:13.769513    4051 start.go:293] postStartSetup for "running-upgrade-116000" (driver="qemu2")
	I1014 07:39:13.769520    4051 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:39:13.769599    4051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:39:13.769610    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	I1014 07:39:13.806519    4051 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:39:13.807890    4051 info.go:137] Remote host: Buildroot 2021.02.12
	I1014 07:39:13.807899    4051 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19790-979/.minikube/addons for local assets ...
	I1014 07:39:13.807985    4051 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19790-979/.minikube/files for local assets ...
	I1014 07:39:13.808098    4051 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem -> 14972.pem in /etc/ssl/certs
	I1014 07:39:13.808204    4051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:39:13.811417    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem --> /etc/ssl/certs/14972.pem (1708 bytes)
	I1014 07:39:13.819649    4051 start.go:296] duration metric: took 50.130084ms for postStartSetup
	I1014 07:39:13.819670    4051 fix.go:56] duration metric: took 893.609917ms for fixHost
	I1014 07:39:13.819742    4051 main.go:141] libmachine: Using SSH client type: native
	I1014 07:39:13.819868    4051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b22480] 0x100b24cc0 <nil>  [] 0s} localhost 61391 <nil> <nil>}
	I1014 07:39:13.819874    4051 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:39:13.886763    4051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728916753.413926390
	
	I1014 07:39:13.886776    4051 fix.go:216] guest clock: 1728916753.413926390
	I1014 07:39:13.886780    4051 fix.go:229] Guest: 2024-10-14 07:39:13.41392639 -0700 PDT Remote: 2024-10-14 07:39:13.819672 -0700 PDT m=+3.379015126 (delta=-405.74561ms)
	I1014 07:39:13.886793    4051 fix.go:200] guest clock delta is within tolerance: -405.74561ms
	I1014 07:39:13.886796    4051 start.go:83] releasing machines lock for "running-upgrade-116000", held for 960.784375ms
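Note: the fixHost step above reads the guest's clock via `date +%s.%N`, compares it with the host's, and leaves the clock alone because the roughly 406ms skew is "within tolerance". A rough Go sketch of that check using the two timestamps from the log; the tolerance constant is an assumption, since the real threshold in fix.go is not shown here:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1728916753, 413926390) // Guest: 07:39:13.41392639 PDT
		host := time.Unix(1728916753, 819672000)  // Remote: 07:39:13.819672 PDT
		delta := guest.Sub(host)                  // about -405.75ms, as logged
		const tolerance = 2 * time.Second         // assumed threshold; not shown in the log
		if delta < -tolerance || delta > tolerance {
			fmt.Printf("guest clock delta %v exceeds tolerance; a reset would follow\n", delta)
			return
		}
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}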
	I1014 07:39:13.886885    4051 ssh_runner.go:195] Run: cat /version.json
	I1014 07:39:13.886895    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	I1014 07:39:13.886966    4051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 07:39:13.886994    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	W1014 07:39:13.887594    4051 sshutil.go:64] dial failure (will retry): dial tcp [::1]:61391: connect: connection refused
	I1014 07:39:13.887614    4051 retry.go:31] will retry after 199.59535ms: dial tcp [::1]:61391: connect: connection refused
	W1014 07:39:14.123384    4051 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1014 07:39:14.123480    4051 ssh_runner.go:195] Run: systemctl --version
	I1014 07:39:14.125606    4051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:39:14.127658    4051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:39:14.127720    4051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1014 07:39:14.131328    4051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1014 07:39:14.135966    4051 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:39:14.135980    4051 start.go:495] detecting cgroup driver to use...
	I1014 07:39:14.136124    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:39:14.142596    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1014 07:39:14.146188    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:39:14.149460    4051 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:39:14.149518    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:39:14.153181    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:39:14.156919    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:39:14.160849    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:39:14.165383    4051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:39:14.168904    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:39:14.172281    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:39:14.176013    4051 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:39:14.179526    4051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:39:14.182542    4051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:39:14.187350    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:39:14.284741    4051 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:39:14.300105    4051 start.go:495] detecting cgroup driver to use...
	I1014 07:39:14.300216    4051 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:39:14.306455    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:39:14.315982    4051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:39:14.323299    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:39:14.329017    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:39:14.334964    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:39:14.341226    4051 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:39:14.342740    4051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:39:14.345438    4051 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1014 07:39:14.351443    4051 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:39:14.458126    4051 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:39:14.556276    4051 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:39:14.556342    4051 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:39:14.562489    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:39:14.668572    4051 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:39:27.398224    4051 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.729919334s)
	I1014 07:39:27.398315    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:39:27.403406    4051 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1014 07:39:27.414581    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:39:27.420999    4051 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:39:27.498643    4051 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:39:27.580660    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:39:27.662550    4051 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:39:27.668821    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:39:27.674505    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:39:27.761896    4051 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:39:27.800110    4051 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:39:27.800222    4051 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:39:27.803401    4051 start.go:563] Will wait 60s for crictl version
	I1014 07:39:27.803473    4051 ssh_runner.go:195] Run: which crictl
	I1014 07:39:27.805007    4051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:39:27.817737    4051 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1014 07:39:27.817822    4051 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:39:27.830293    4051 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:39:27.847094    4051 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1014 07:39:27.847318    4051 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1014 07:39:27.849049    4051 kubeadm.go:883] updating cluster {Name:running-upgrade-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61423 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1014 07:39:27.849099    4051 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1014 07:39:27.849158    4051 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:39:27.859685    4051 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:39:27.859695    4051 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1014 07:39:27.859752    4051 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:39:27.863526    4051 ssh_runner.go:195] Run: which lz4
	I1014 07:39:27.864956    4051 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:39:27.866279    4051 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:39:27.866291    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1014 07:39:28.830879    4051 docker.go:653] duration metric: took 966.001541ms to copy over tarball
	I1014 07:39:28.830952    4051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:39:30.162748    4051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.3318125s)
	I1014 07:39:30.162761    4051 ssh_runner.go:146] rm: /preloaded.tar.lz4
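Note: the preload path above stats /preloaded.tar.lz4 on the guest, copies the roughly 360 MB tarball over only because that stat fails, unpacks it into /var with lz4, and then removes it. A condensed Go sketch of the copy-if-missing half, with stand-in paths (the real flow copies from the .minikube cache over SSH):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyIfMissing copies src to dst only when dst does not already exist,
	// mirroring the stat-then-scp sequence in the log.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			fmt.Println("preload already present; skipping copy")
			return nil
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The log then unpacks with: sudo tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4
	}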
	I1014 07:39:30.178955    4051 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:39:30.182075    4051 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1014 07:39:30.187295    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:39:30.265122    4051 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:39:30.463403    4051 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:39:30.478670    4051 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:39:30.478679    4051 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1014 07:39:30.478684    4051 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 07:39:30.484270    4051 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:39:30.486816    4051 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:39:30.488760    4051 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:39:30.488827    4051 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:39:30.490622    4051 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:39:30.490819    4051 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:39:30.492167    4051 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:39:30.492152    4051 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:39:30.494277    4051 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:39:30.494449    4051 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1014 07:39:30.495324    4051 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:39:30.495622    4051 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:39:30.497043    4051 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1014 07:39:30.497051    4051 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:39:30.498073    4051 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:39:30.498949    4051 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:39:31.119554    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:39:31.126003    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:39:31.137179    4051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1014 07:39:31.137440    4051 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:39:31.137509    4051 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:39:31.138410    4051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1014 07:39:31.138425    4051 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:39:31.138464    4051 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:39:31.146976    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:39:31.149322    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1014 07:39:31.150556    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1014 07:39:31.160999    4051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1014 07:39:31.161019    4051 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:39:31.161094    4051 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:39:31.172441    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W1014 07:39:31.190472    4051 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1014 07:39:31.190811    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:39:31.201636    4051 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1014 07:39:31.201659    4051 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:39:31.201725    4051 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:39:31.203611    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1014 07:39:31.214783    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1014 07:39:31.215219    4051 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1014 07:39:31.224557    4051 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1014 07:39:31.224577    4051 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1014 07:39:31.224581    4051 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1014 07:39:31.224606    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1014 07:39:31.224643    4051 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1014 07:39:31.261846    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1014 07:39:31.262006    4051 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1014 07:39:31.278784    4051 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1014 07:39:31.278819    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1014 07:39:31.282819    4051 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1014 07:39:31.282836    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1014 07:39:31.329881    4051 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1014 07:39:31.329908    4051 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1014 07:39:31.329915    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1014 07:39:31.334264    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1014 07:39:31.367205    4051 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1014 07:39:31.367279    4051 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1014 07:39:31.367302    4051 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:39:31.367379    4051 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1014 07:39:31.379183    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1014 07:39:31.379336    4051 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1014 07:39:31.380994    4051 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1014 07:39:31.381031    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1014 07:39:31.387128    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:39:31.411247    4051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1014 07:39:31.411273    4051 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:39:31.411344    4051 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:39:31.437799    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1014 07:39:31.479425    4051 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1014 07:39:31.479555    4051 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:39:31.521619    4051 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1014 07:39:31.521640    4051 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:39:31.521705    4051 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:39:31.679399    4051 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1014 07:39:31.679413    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1014 07:39:32.252921    4051 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 07:39:32.252972    4051 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1014 07:39:32.253073    4051 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 07:39:32.254757    4051 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1014 07:39:32.254777    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1014 07:39:32.292782    4051 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 07:39:32.292808    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1014 07:39:32.537160    4051 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 07:39:32.537197    4051 cache_images.go:92] duration metric: took 2.058552375s to LoadCachedImages
	W1014 07:39:32.537466    4051 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
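Note: the cache_images pass above asks the runtime for each expected image (`docker image inspect --format {{.Id}}`), marks any image whose ID does not match the expected hash as "needs transfer", loads replacements from the on-host cache via `docker load`, and finally fails because the kube-* tarballs are absent from that cache. A simplified Go sketch of the probe-and-load decision for one image; it checks presence only, whereas the real code also compares the inspected ID against an expected hash:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// runtimeImageID returns the image ID the container runtime reports, or "".
	func runtimeImageID(image string) string {
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return "" // image absent from the runtime
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		image := "registry.k8s.io/pause:3.7"
		cached := "/var/lib/minikube/images/pause_3.7" // cache path used in the log
		if id := runtimeImageID(image); id != "" {
			fmt.Printf("%s present as %s; no transfer needed\n", image, id)
			return
		}
		if _, err := os.Stat(cached); err != nil {
			fmt.Printf("%s needs transfer but the cache entry is missing: %v\n", image, err)
			os.Exit(1) // mirrors the "Unable to load cached images" failure
		}
		fmt.Printf("would run: sudo cat %s | docker load\n", cached)
	}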
	I1014 07:39:32.537475    4051 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1014 07:39:32.537532    4051 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-116000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:39:32.537610    4051 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:39:32.554318    4051 cni.go:84] Creating CNI manager for ""
	I1014 07:39:32.554333    4051 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:39:32.556089    4051 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:39:32.556112    4051 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-116000 NodeName:running-upgrade-116000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:39:32.556202    4051 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-116000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
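The kubeadm config above is one file holding four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal sketch of how such a multi-document file can be sanity-checked before it is copied to the node (the `kindsOf` helper is illustrative, not part of minikube):

```go
package main

import (
	"fmt"
	"strings"
)

// kindsOf returns the `kind:` declared by each YAML document in a
// multi-document file, splitting on the standard "---" separator.
func kindsOf(config string) []string {
	var kinds []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			t := strings.TrimSpace(line)
			if strings.HasPrefix(t, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
			}
		}
	}
	return kinds
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration"
	fmt.Println(kindsOf(cfg)) // [InitConfiguration KubeletConfiguration]
}
```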
	I1014 07:39:32.556289    4051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1014 07:39:32.559920    4051 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:39:32.559958    4051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 07:39:32.563188    4051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1014 07:39:32.568317    4051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:39:32.573502    4051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1014 07:39:32.578539    4051 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1014 07:39:32.579971    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:39:32.644104    4051 ssh_runner.go:195] Run: sudo systemctl start kubelet
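The `scp memory -->` and `systemctl` lines above are minikube's ssh_runner copying generated unit files into the guest and restarting kubelet over one SSH connection. A minimal sketch of that pattern using golang.org/x/crypto/ssh (host, user, and key path are placeholders, not values from this run):

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "10.0.2.15:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Each command runs in its own session on the shared connection.
	for _, cmd := range []string{
		"sudo systemctl daemon-reload",
		"sudo systemctl start kubelet",
	} {
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		if out, err := sess.CombinedOutput(cmd); err != nil {
			log.Fatalf("%s: %v\n%s", cmd, err, out)
		}
		sess.Close()
	}
}
```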
	I1014 07:39:32.649663    4051 certs.go:68] Setting up /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000 for IP: 10.0.2.15
	I1014 07:39:32.651268    4051 certs.go:194] generating shared ca certs ...
	I1014 07:39:32.651283    4051 certs.go:226] acquiring lock for ca certs: {Name:mk8f9f58f46caac35c7cea538c3ba1c75987d64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:39:32.651517    4051 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19790-979/.minikube/ca.key
	I1014 07:39:32.651576    4051 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.key
	I1014 07:39:32.651757    4051 certs.go:256] generating profile certs ...
	I1014 07:39:32.652013    4051 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/client.key
	I1014 07:39:32.652034    4051 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.key.081d17ca
	I1014 07:39:32.652448    4051 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.crt.081d17ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1014 07:39:32.803283    4051 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.crt.081d17ca ...
	I1014 07:39:32.803291    4051 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.crt.081d17ca: {Name:mkcd8ccb6bb6e6e7f72fe5aa5e1b42e9198a850e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:39:32.803745    4051 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.key.081d17ca ...
	I1014 07:39:32.803753    4051 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.key.081d17ca: {Name:mkdda396c73175e17cdb502173e1db83a6d7c239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:39:32.803952    4051 certs.go:381] copying /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.crt.081d17ca -> /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.crt
	I1014 07:39:32.804112    4051 certs.go:385] copying /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.key.081d17ca -> /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.key
	I1014 07:39:32.806349    4051 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/proxy-client.key
	I1014 07:39:32.806524    4051 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497.pem (1338 bytes)
	W1014 07:39:32.806565    4051 certs.go:480] ignoring /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497_empty.pem, impossibly tiny 0 bytes
	I1014 07:39:32.806572    4051 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 07:39:32.806609    4051 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem (1078 bytes)
	I1014 07:39:32.806649    4051 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem (1123 bytes)
	I1014 07:39:32.806680    4051 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem (1675 bytes)
	I1014 07:39:32.806747    4051 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem (1708 bytes)
	I1014 07:39:32.807844    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:39:32.815733    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 07:39:32.823298    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:39:32.830384    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:39:32.837056    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 07:39:32.843950    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:39:32.851184    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:39:32.860480    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 07:39:32.867751    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem --> /usr/share/ca-certificates/14972.pem (1708 bytes)
	I1014 07:39:32.874546    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:39:32.881967    4051 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497.pem --> /usr/share/ca-certificates/1497.pem (1338 bytes)
	I1014 07:39:32.890800    4051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:39:32.930240    4051 ssh_runner.go:195] Run: openssl version
	I1014 07:39:32.934716    4051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14972.pem && ln -fs /usr/share/ca-certificates/14972.pem /etc/ssl/certs/14972.pem"
	I1014 07:39:32.941453    4051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14972.pem
	I1014 07:39:32.950381    4051 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:46 /usr/share/ca-certificates/14972.pem
	I1014 07:39:32.950444    4051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14972.pem
	I1014 07:39:32.953389    4051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14972.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 07:39:32.956762    4051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:39:32.961965    4051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:39:32.965959    4051 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:39:32.966011    4051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:39:32.968383    4051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:39:32.973867    4051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1497.pem && ln -fs /usr/share/ca-certificates/1497.pem /etc/ssl/certs/1497.pem"
	I1014 07:39:32.979030    4051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1497.pem
	I1014 07:39:32.981391    4051 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:46 /usr/share/ca-certificates/1497.pem
	I1014 07:39:32.981438    4051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1497.pem
	I1014 07:39:32.983849    4051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1497.pem /etc/ssl/certs/51391683.0"
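Each certificate installed above gets an OpenSSL subject-hash symlink (`/etc/ssl/certs/<hash>.0`), which is how OpenSSL-based tools locate trusted CAs. A sketch of the same step in Go, shelling out to `openssl` as the log does (error handling trimmed):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the OpenSSL subject hash of a PEM cert and creates
// the /etc/ssl/certs/<hash>.0 symlink, mirroring the `ln -fs` in the log.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // drop any stale link first, like `ln -f`
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```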
	I1014 07:39:32.990155    4051 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:39:32.991980    4051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 07:39:32.994272    4051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 07:39:32.996189    4051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 07:39:32.998476    4051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 07:39:33.000906    4051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 07:39:33.002839    4051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
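The `openssl x509 -checkend 86400` runs above ask whether each certificate will still be valid 24 hours from now. An equivalent check with Go's standard library (sketch; the path is illustrative):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path expires within duration d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```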
	I1014 07:39:33.004821    4051 kubeadm.go:392] StartCluster: {Name:running-upgrade-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61423 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-116000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:39:33.004923    4051 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:39:33.029020    4051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:39:33.035615    4051 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 07:39:33.035644    4051 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 07:39:33.035706    4051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 07:39:33.057130    4051 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:39:33.058218    4051 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-116000" does not appear in /Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:39:33.058304    4051 kubeconfig.go:62] /Users/jenkins/minikube-integration/19790-979/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-116000" cluster setting kubeconfig missing "running-upgrade-116000" context setting]
	I1014 07:39:33.058454    4051 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/kubeconfig: {Name:mkbe79fce3a1d9ddd6036a978e097f20767985b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:39:33.059171    4051 kapi.go:59] client config for running-upgrade-116000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/client.key", CAFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10257ae40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
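The rest.Config dump above shows the pieces assembled for the Kubernetes client: the API server host plus the profile's client cert/key and the cluster CA. A sketch of building and using such a client with k8s.io/client-go (paths are placeholders, not the ones from this run):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and TLS material mirror the fields visible in the log dump.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/running-upgrade-116000/client.crt",
			KeyFile:  "/path/to/profiles/running-upgrade-116000/client.key",
			CAFile:   "/path/to/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```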
	I1014 07:39:33.067809    4051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 07:39:33.079164    4051 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-116000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
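The drift detection above relies on `diff -u` exit codes: 0 means the stored and regenerated kubeadm.yaml match, 1 means they differ (with the unified diff on stdout, as shown). A sketch of that check:

```go
package main

import (
	"fmt"
	"os/exec"
)

// driftDetected runs `diff -u old new` and interprets the exit code:
// 0 = identical, 1 = differ (output is the unified diff), 2 = error.
func driftDetected(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drift, diff, err := driftDetected("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	if drift {
		fmt.Print(diff)
	}
}
```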
	I1014 07:39:33.079175    4051 kubeadm.go:1160] stopping kube-system containers ...
	I1014 07:39:33.079253    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:39:33.128684    4051 docker.go:483] Stopping containers: [fce38beefca3 0e533dea1b9a 3a3b8e55d4b1 8d38478d100a 1471b6000312 d0eb43f50e1b 65905c99c44e dc7f520695cc 039dbafcce2e a616c1d98c61 8bc39f480b91 7e66b3d84ea5 65db67265792 e9d37134b4f6 2619e07acb13 d558c10aaab6 e75dd76f8838 1f74f4cccd3e]
	I1014 07:39:33.128772    4051 ssh_runner.go:195] Run: docker stop fce38beefca3 0e533dea1b9a 3a3b8e55d4b1 8d38478d100a 1471b6000312 d0eb43f50e1b 65905c99c44e dc7f520695cc 039dbafcce2e a616c1d98c61 8bc39f480b91 7e66b3d84ea5 65db67265792 e9d37134b4f6 2619e07acb13 d558c10aaab6 e75dd76f8838 1f74f4cccd3e
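The stop sequence above is two docker invocations: list container IDs matching the kube-system name filter, then pass them all to one `docker stop`. A sketch:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter and format flags as in the log.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Println(err)
	}
}
```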
	I1014 07:39:33.328832    4051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 07:39:33.404111    4051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:39:33.408334    4051 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 14 14:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct 14 14:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 14 14:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 14 14:38 /etc/kubernetes/scheduler.conf
	
	I1014 07:39:33.408381    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/admin.conf
	I1014 07:39:33.411815    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:39:33.411846    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:39:33.415026    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/kubelet.conf
	I1014 07:39:33.417692    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:39:33.417721    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:39:33.420364    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/controller-manager.conf
	I1014 07:39:33.423561    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:39:33.423594    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:39:33.426547    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/scheduler.conf
	I1014 07:39:33.429280    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:39:33.429316    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
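The grep-then-rm loop above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it, so the `kubeadm init phase kubeconfig` step that follows regenerates them. A sketch of the same loop:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:61423"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // unreadable behaves like grep's non-zero exit
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("stale endpoint, removing:", f)
			os.Remove(f)
		}
	}
}
```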
	I1014 07:39:33.432205    4051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:39:33.435528    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:39:33.465493    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:39:33.840954    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:39:34.088373    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:39:34.112243    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
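The restart path above re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config rather than doing a full `kubeadm init`. A sketch of driving those phases, with the PATH prefixing seen in the log:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prefer the version-pinned binaries, as the log's env PATH= does.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v: %v", p, err)
		}
	}
}
```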
	I1014 07:39:34.137570    4051 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:39:34.137655    4051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:39:34.640045    4051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:39:35.140048    4051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:39:35.639673    4051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:39:35.643941    4051 api_server.go:72] duration metric: took 1.506406167s to wait for apiserver process to appear ...
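The wait above polls `pgrep -xnf` roughly every 500ms until a kube-apiserver process matching the pattern appears, then records the elapsed time as a duration metric. A sketch (the timeout value is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until it exits 0 (a match exists) or the
// deadline passes.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	start := time.Now()
	if err := waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to wait for apiserver process to appear\n", time.Since(start))
}
```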
	I1014 07:39:35.643949    4051 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:39:35.644179    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:39:40.646540    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:40.646639    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:39:45.647650    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:45.647736    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:39:50.648599    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:50.648630    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:39:55.649306    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:55.649335    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:00.650739    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:00.650829    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:05.653341    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:05.653376    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:10.653571    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:10.653590    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:15.653953    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:15.654072    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:20.656551    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:20.656590    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:25.658826    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:25.658905    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:30.661425    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:30.661465    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:35.663625    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
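Every healthz probe above times out after five seconds, after which the tool falls back to gathering component logs (the lines that follow). A simplified sketch of such a probe, which skips certificate verification for illustration and treats a 200 response with body "ok" as healthy:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s gaps between checks in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "Client.Timeout exceeded while awaiting headers"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}
```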
	I1014 07:40:35.663970    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:35.692404    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:35.692549    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:35.713619    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:35.713724    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:35.727709    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:35.727792    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:35.743175    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:35.743255    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:35.754551    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:35.754631    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:35.765542    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:35.765636    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:35.777630    4051 logs.go:282] 0 containers: []
	W1014 07:40:35.777641    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:35.777711    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:35.788984    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:35.789014    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:35.789020    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:35.800560    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:35.800572    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:35.827181    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:35.827188    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:35.841461    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:35.841472    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:35.881725    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:35.881733    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:40:35.894935    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:35.894946    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:35.906757    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:35.906767    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:35.918247    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:35.918262    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:35.932518    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:35.932531    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:35.946137    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:35.946148    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:35.957968    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:35.957977    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:35.976700    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:35.976719    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:35.988511    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:35.988522    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:40:36.000233    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:36.000247    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:36.004799    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:36.004808    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:36.112629    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:36.112642    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:36.126868    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:36.126885    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:40:38.640705    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:43.643422    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:43.643659    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:43.661444    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:43.661541    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:43.677547    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:43.677631    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:43.687976    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:43.688060    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:43.702571    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:43.702661    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:43.713555    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:43.713639    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:43.724549    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:43.724625    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:43.734997    4051 logs.go:282] 0 containers: []
	W1014 07:40:43.735007    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:43.735085    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:43.746071    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:43.746090    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:43.746095    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:43.757843    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:43.757854    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:40:43.770260    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:43.770275    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:43.799923    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:43.799933    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:43.811398    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:43.811409    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:43.838084    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:43.838096    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:43.850778    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:43.850788    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:40:43.862202    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:43.862214    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:43.876697    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:43.876708    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:43.881129    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:43.881137    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:43.918187    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:43.918197    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:43.932215    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:43.932225    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:43.944369    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:43.944381    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:43.955895    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:43.955907    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:43.998714    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:43.998722    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:44.010991    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:44.011001    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:44.032775    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:44.032786    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:40:46.546240    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:51.548629    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:51.548873    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:51.567849    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:51.567944    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:51.581566    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:51.581652    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:51.593178    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:51.593262    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:51.604316    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:51.604411    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:51.615092    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:51.615182    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:51.626239    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:51.626314    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:51.637013    4051 logs.go:282] 0 containers: []
	W1014 07:40:51.637025    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:51.637107    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:51.647619    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:51.647636    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:51.647641    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:51.661687    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:51.661701    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:51.675245    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:51.675258    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:51.686423    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:51.686434    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:40:51.698382    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:51.698393    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:51.709454    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:51.709470    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:51.720810    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:51.720823    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:51.761198    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:51.761206    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:40:51.772463    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:51.772477    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:51.784249    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:51.784259    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:51.811745    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:51.811752    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:51.816678    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:51.816686    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:51.854692    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:51.854702    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:51.878831    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:51.878844    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:51.896196    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:51.896207    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:51.910262    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:51.910275    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:51.922343    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:51.922354    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:40:54.435777    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:59.436441    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:59.436619    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:59.448475    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:59.448574    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:59.459180    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:59.459263    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:59.469685    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:59.469764    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:59.480206    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:59.480274    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:59.490960    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:59.491038    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:59.501217    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:59.501288    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:59.512256    4051 logs.go:282] 0 containers: []
	W1014 07:40:59.512269    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:59.512342    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:59.525532    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:59.525553    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:59.525561    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:59.529932    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:59.529941    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:59.565496    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:59.565509    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:59.581089    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:59.581106    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:59.592675    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:59.592687    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:59.608665    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:59.608676    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:59.621318    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:59.621330    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:40:59.632870    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:59.632885    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:59.644228    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:59.644245    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:59.655292    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:59.655304    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:59.666477    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:59.666490    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:59.683718    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:59.683732    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:59.710694    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:59.710701    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:59.723448    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:59.723459    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:59.767840    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:59.767860    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:59.781683    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:59.781692    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:40:59.796717    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:59.796732    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:02.310218    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:07.312613    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:07.312900    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:07.335657    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:07.335764    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:07.350881    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:07.350969    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:07.363249    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:07.363340    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:07.374404    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:07.374487    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:07.384930    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:07.385006    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:07.397232    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:07.397311    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:07.407575    4051 logs.go:282] 0 containers: []
	W1014 07:41:07.407586    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:07.407650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:07.423949    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:07.423967    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:07.423972    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:07.438122    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:07.438135    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:07.451565    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:07.451576    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:07.467663    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:07.467674    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:07.496044    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:07.496058    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:07.508134    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:07.508148    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:07.549987    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:07.549997    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:07.585401    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:07.585414    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:07.597215    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:07.597228    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:07.608473    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:07.608490    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:07.625602    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:07.625614    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:07.630071    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:07.630079    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:07.641386    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:07.641400    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:07.652326    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:07.652339    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:07.666187    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:07.666200    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:07.677661    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:07.677674    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:07.689216    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:07.689232    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:10.202942    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:15.205266    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:15.205588    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:15.234028    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:15.234176    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:15.256245    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:15.256333    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:15.269346    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:15.269432    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:15.279939    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:15.280020    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:15.290568    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:15.290641    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:15.300952    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:15.301029    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:15.311070    4051 logs.go:282] 0 containers: []
	W1014 07:41:15.311082    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:15.311148    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:15.321499    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:15.321518    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:15.321523    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:15.335279    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:15.335289    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:15.346514    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:15.346529    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:15.358191    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:15.358199    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:15.375929    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:15.375940    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:15.388009    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:15.388020    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:15.414068    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:15.414081    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:15.454397    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:15.454405    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:15.459025    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:15.459034    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:15.494113    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:15.494126    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:15.508359    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:15.508374    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:15.519883    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:15.519896    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:15.531494    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:15.531505    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:15.545235    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:15.545247    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:15.556997    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:15.557010    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:15.568195    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:15.568208    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:15.580650    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:15.580663    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:18.093585    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:23.095696    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
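
The five-second gap between each healthz check and the "stopped" line that follows it is a client-side timeout: the probe is an HTTPS GET against the apiserver's /healthz endpoint that gives up after 5s, then falls through to another log-gathering pass. A minimal sketch of such a probe, with the endpoint and timeout taken from the log (the helper name and the relaxed TLS handling are assumptions for illustration, not minikube's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz is a hypothetical helper mirroring the probe in the log:
// GET https://10.0.2.15:8443/healthz with a 5-second client timeout
// (the same 5s gap that precedes each "context deadline exceeded").
// Certificate checks are skipped here only to keep the sketch short;
// minikube itself verifies the apiserver via the cluster CA.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "Client.Timeout exceeded while awaiting headers"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
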
	I1014 07:41:23.095876    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:23.119641    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:23.119734    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:23.131990    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:23.132072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:23.155780    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:23.155866    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:23.166355    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:23.166442    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:23.176583    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:23.176652    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:23.187856    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:23.187932    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:23.197947    4051 logs.go:282] 0 containers: []
	W1014 07:41:23.197958    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:23.198034    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:23.208184    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
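
The eight docker ps queries above are the discovery step that precedes each gathering pass: one query per well-known k8s_<component> name prefix, keeping only the container IDs (here two IDs per control-plane component, which with docker ps -a typically means an exited instance listed alongside the current one). A sketch of the same step, with the command string copied from the log (the Go wrapper around os/exec is an illustration, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs the query shown in the log for each component:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
// and returns the matching container IDs, one per output line.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// mirrors the logs.go:282 "N containers: [...]" lines
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
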
	I1014 07:41:23.208202    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:23.208209    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:23.232340    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:23.232348    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:23.236482    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:23.236491    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:23.250237    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:23.250250    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:23.261412    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:23.261424    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:23.279737    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:23.279747    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:23.295325    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:23.295334    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:23.306452    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:23.306463    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:23.348178    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:23.348194    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:23.360070    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:23.360085    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:23.371734    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:23.371744    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:23.383183    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:23.383200    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:23.395299    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:23.395314    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:23.434282    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:23.434293    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:23.452809    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:23.452819    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:23.463775    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:23.463785    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:23.474922    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:23.474941    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
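
Each "Gathering logs for …" pair above is one shell command run through ssh_runner: docker logs --tail 400 <id> for containers, journalctl for the kubelet and Docker units, dmesg for the kernel ring, and the versioned kubectl binary for describe nodes. A condensed sketch of that dispatch, with the commands copied from the log (the run helper and the local execution are assumptions; minikube runs these over SSH inside the guest):

package main

import (
	"fmt"
	"os/exec"
)

// run mirrors the pattern in the log: every collection step is a single
// /bin/bash -c command (executed locally here rather than over SSH).
func run(cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%q failed: %v\n", cmd, err)
		return
	}
	fmt.Printf("%q returned %d bytes\n", cmd, len(out))
}

func main() {
	// Container logs, as in "Gathering logs for etcd [1540d4312173] ...".
	for _, id := range []string{"1540d4312173", "b514b0e417d6"} {
		run("docker logs --tail 400 " + id)
	}
	// Host-level sources use journalctl and kubectl instead of docker logs.
	run("sudo journalctl -u kubelet -n 400")
	run("sudo journalctl -u docker -u cri-docker -n 400")
	run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
		" --kubeconfig=/var/lib/minikube/kubeconfig")
}

The remainder of the section repeats this same probe/discover/gather cycle, roughly every eight seconds, as the apiserver never becomes healthy.
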
	I1014 07:41:25.988316    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:30.989709    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:30.989856    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:31.003335    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:31.003443    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:31.014741    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:31.014831    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:31.025440    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:31.025518    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:31.035788    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:31.035868    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:31.046162    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:31.046238    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:31.056555    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:31.056629    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:31.067349    4051 logs.go:282] 0 containers: []
	W1014 07:41:31.067362    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:31.067428    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:31.077525    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:31.077544    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:31.077552    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:31.114390    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:31.114401    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:31.128576    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:31.128588    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:31.146373    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:31.146382    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:31.150915    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:31.150922    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:31.168369    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:31.168381    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:31.212047    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:31.212058    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:31.226137    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:31.226148    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:31.242321    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:31.242332    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:31.253357    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:31.253368    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:31.279044    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:31.279054    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:31.291857    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:31.291866    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:31.303375    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:31.303385    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:31.314787    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:31.314797    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:31.326184    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:31.326195    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:31.337281    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:31.337294    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:31.349466    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:31.349480    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:33.862768    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:38.865131    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:38.865349    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:38.881858    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:38.881947    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:38.893681    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:38.893765    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:38.904567    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:38.904641    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:38.914939    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:38.915009    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:38.925705    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:38.925770    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:38.935906    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:38.935972    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:38.946369    4051 logs.go:282] 0 containers: []
	W1014 07:41:38.946385    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:38.946480    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:38.957322    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:38.957339    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:38.957344    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:38.968989    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:38.968998    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:39.011734    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:39.011747    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:39.025486    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:39.025497    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:39.037259    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:39.037270    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:39.048738    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:39.048751    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:39.061016    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:39.061027    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:39.072794    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:39.072806    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:39.077445    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:39.077453    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:39.091458    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:39.091471    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:39.103448    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:39.103460    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:39.121366    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:39.121376    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:39.132329    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:39.132342    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:39.156473    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:39.156482    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:39.168338    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:39.168351    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:39.205302    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:39.205313    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:39.219900    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:39.219912    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:41.733998    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:46.736381    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:46.736711    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:46.758951    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:46.759072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:46.774316    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:46.774411    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:46.786954    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:46.787044    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:46.798877    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:46.798966    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:46.809683    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:46.809751    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:46.820821    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:46.820898    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:46.831009    4051 logs.go:282] 0 containers: []
	W1014 07:41:46.831019    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:46.831081    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:46.841253    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:46.841273    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:46.841278    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:46.853550    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:46.853562    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:46.865071    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:46.865081    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:46.878151    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:46.878165    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:46.893551    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:46.893562    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:46.905172    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:46.905182    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:46.916261    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:46.916273    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:46.956113    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:46.956123    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:46.960436    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:46.960442    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:47.002763    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:47.002773    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:47.014743    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:47.014753    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:47.031823    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:47.031833    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:47.057366    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:47.057373    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:47.068707    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:47.068721    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:47.084291    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:47.084305    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:47.096127    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:47.096139    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:47.107428    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:47.107439    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:49.620854    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:54.623039    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:54.623284    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:54.644543    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:54.644648    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:54.659174    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:54.659260    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:54.671576    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:54.671664    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:54.682110    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:54.682202    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:54.693585    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:54.693670    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:54.704126    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:54.704212    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:54.714117    4051 logs.go:282] 0 containers: []
	W1014 07:41:54.714129    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:54.714199    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:54.724698    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:54.724714    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:54.724719    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:54.764861    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:54.764871    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:54.775481    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:54.775495    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:54.787725    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:54.787739    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:54.798653    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:54.798667    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:54.810263    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:54.810276    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:54.825200    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:54.825218    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:54.837713    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:54.837725    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:54.862591    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:54.862598    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:54.866682    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:54.866688    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:54.880719    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:54.880732    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:54.895723    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:54.895733    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:54.907314    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:54.907328    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:54.918702    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:54.918712    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:54.935930    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:54.935940    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:54.974114    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:54.974127    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:54.987807    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:54.987819    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:57.500697    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:02.501577    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:02.501897    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:02.526977    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:02.527134    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:02.546066    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:02.546161    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:02.568267    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:02.568347    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:02.579360    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:02.579453    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:02.589937    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:02.590014    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:02.611883    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:02.611961    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:02.622197    4051 logs.go:282] 0 containers: []
	W1014 07:42:02.622209    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:02.622278    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:02.633197    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:02.633215    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:02.633221    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:02.638032    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:02.638041    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:02.651942    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:02.651951    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:02.663365    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:02.663376    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:02.706231    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:02.706240    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:02.721236    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:02.721246    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:02.733513    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:02.733526    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:02.751586    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:02.751596    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:02.762941    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:02.762953    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:02.774380    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:02.774394    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:02.786016    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:02.786027    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:02.797850    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:02.797861    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:02.832910    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:02.832921    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:02.845000    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:02.845012    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:02.857238    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:02.857249    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:02.868531    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:02.868543    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:02.880154    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:02.880163    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:05.408284    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:10.410860    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:10.411100    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:10.444029    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:10.444129    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:10.458538    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:10.458628    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:10.469965    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:10.470050    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:10.480448    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:10.480529    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:10.498625    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:10.498702    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:10.509725    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:10.509807    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:10.519880    4051 logs.go:282] 0 containers: []
	W1014 07:42:10.519890    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:10.519961    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:10.530144    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:10.530159    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:10.530165    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:10.541079    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:10.541092    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:10.552225    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:10.552240    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:10.576229    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:10.576236    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:10.587796    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:10.587806    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:10.624621    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:10.624631    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:10.636588    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:10.636598    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:10.648097    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:10.648109    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:10.659343    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:10.659356    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:10.681115    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:10.681128    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:10.693633    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:10.693645    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:10.698389    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:10.698398    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:10.709569    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:10.709581    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:10.723439    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:10.723449    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:10.764108    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:10.764121    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:10.778868    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:10.778884    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:10.790418    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:10.790435    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:13.304618    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:18.305990    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:18.306135    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:18.319705    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:18.319795    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:18.331024    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:18.331105    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:18.341550    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:18.341618    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:18.352197    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:18.352273    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:18.363283    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:18.363352    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:18.374440    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:18.374505    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:18.384684    4051 logs.go:282] 0 containers: []
	W1014 07:42:18.384697    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:18.384765    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:18.395655    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:18.395677    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:18.395683    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:18.399958    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:18.399965    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:18.411388    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:18.411401    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:18.434937    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:18.434943    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:18.475768    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:18.475775    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:18.516931    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:18.516944    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:18.528488    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:18.528504    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:18.539796    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:18.539808    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:18.552520    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:18.552534    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:18.563984    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:18.563995    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:18.575755    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:18.575767    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:18.594333    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:18.594344    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:18.608760    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:18.608770    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:18.620097    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:18.620107    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:18.631024    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:18.631037    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:18.642980    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:18.642990    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:18.663222    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:18.663231    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:21.179916    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:26.182307    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:26.182642    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:26.213504    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:26.213643    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:26.230976    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:26.231078    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:26.245023    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:26.245108    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:26.256702    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:26.256788    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:26.270831    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:26.270913    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:26.284388    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:26.284467    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:26.294705    4051 logs.go:282] 0 containers: []
	W1014 07:42:26.294716    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:26.294795    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:26.304847    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:26.304871    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:26.304877    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:26.316363    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:26.316373    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:26.330028    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:26.330043    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:26.373625    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:26.373635    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:26.388550    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:26.388560    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:26.400134    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:26.400146    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:26.411373    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:26.411386    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:26.433318    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:26.433329    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:26.444673    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:26.444684    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:26.470912    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:26.470923    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:26.506046    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:26.506058    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:26.520070    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:26.520083    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:26.535038    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:26.535048    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:26.547060    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:26.547070    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:26.557828    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:26.557850    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:26.562308    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:26.562314    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:26.580794    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:26.580805    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:29.095505    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:34.097629    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:34.097760    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:34.108848    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:34.108937    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:34.119587    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:34.119663    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:34.129677    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:34.129761    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:34.140686    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:34.140762    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:34.151949    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:34.152027    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:34.162923    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:34.162996    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:34.173220    4051 logs.go:282] 0 containers: []
	W1014 07:42:34.173237    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:34.173307    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:34.199541    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:34.199558    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:34.199563    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:34.218805    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:34.218821    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:34.231750    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:34.231763    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:34.242826    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:34.242837    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:34.246992    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:34.246999    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:34.282571    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:34.282583    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:34.296249    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:34.296260    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:34.307708    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:34.307717    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:34.318595    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:34.318608    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:34.343633    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:34.343640    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:34.355534    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:34.355547    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:34.398024    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:34.398030    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:34.412745    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:34.412763    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:34.423768    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:34.423780    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:34.435241    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:34.435253    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:34.458852    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:34.458864    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:34.473376    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:34.473385    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:36.988191    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:41.990395    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:41.990794    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:42.020537    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:42.020690    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:42.038620    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:42.038735    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:42.052994    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:42.053089    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:42.065126    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:42.065209    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:42.076006    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:42.076086    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:42.088555    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:42.088627    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:42.099802    4051 logs.go:282] 0 containers: []
	W1014 07:42:42.099815    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:42.099888    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:42.110580    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:42.110600    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:42.110606    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:42.121666    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:42.121678    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:42.135183    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:42.135195    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:42.152746    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:42.152757    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:42.195792    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:42.195805    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:42.233864    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:42.233876    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:42.248409    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:42.248420    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:42.262077    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:42.262090    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:42.273958    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:42.273970    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:42.278527    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:42.278534    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:42.289601    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:42.289613    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:42.301377    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:42.301392    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:42.317442    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:42.317452    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:42.329031    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:42.329043    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:42.340058    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:42.340074    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:42.351183    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:42.351193    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:42.362399    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:42.362409    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:44.889230    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:49.891725    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:49.891914    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:49.907914    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:49.908006    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:49.920408    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:49.920487    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:49.930966    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:49.931049    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:49.941322    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:49.941403    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:49.952156    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:49.952236    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:49.970498    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:49.970567    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:49.986279    4051 logs.go:282] 0 containers: []
	W1014 07:42:49.986296    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:49.986362    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:50.006352    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:50.006369    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:50.006376    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:50.018134    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:50.018155    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:50.029537    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:50.029548    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:50.041519    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:50.041530    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:50.058791    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:50.058800    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:50.095347    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:50.095357    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:50.107143    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:50.107156    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:50.122827    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:50.122842    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:50.127273    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:50.127280    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:50.143087    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:50.143097    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:50.154252    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:50.154264    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:50.177697    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:50.177705    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:50.193226    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:50.193237    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:50.235140    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:50.235149    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:50.248533    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:50.248542    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:50.261022    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:50.261033    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:50.272687    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:50.272697    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:52.786462    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:57.787978    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:57.788406    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:57.821811    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:57.821952    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:57.843708    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:57.843808    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:57.868090    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:57.868182    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:57.891983    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:57.892064    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:57.910073    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:57.910156    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:57.920739    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:57.920816    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:57.931872    4051 logs.go:282] 0 containers: []
	W1014 07:42:57.931883    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:57.931947    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:57.942401    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:57.942421    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:57.942426    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:57.946702    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:57.946710    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:57.961687    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:57.961699    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:57.986211    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:57.986224    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:57.997849    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:57.997859    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:58.012586    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:58.012594    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:58.024177    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:58.024187    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:58.048612    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:58.048622    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:58.089102    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:58.089114    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:58.102943    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:58.102953    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:58.120877    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:58.120888    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:58.132620    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:58.132634    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:58.144721    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:58.144731    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:58.156373    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:58.156384    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:58.195981    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:58.195991    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:58.210951    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:58.210964    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:58.222763    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:58.222773    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:00.736887    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:05.738153    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:05.738268    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:05.749211    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:05.749295    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:05.759917    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:05.759993    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:05.770105    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:05.770181    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:05.780787    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:05.780870    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:05.791135    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:05.791208    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:05.801815    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:05.801897    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:05.812406    4051 logs.go:282] 0 containers: []
	W1014 07:43:05.812423    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:05.812496    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:05.822911    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:05.822929    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:05.822935    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:05.858664    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:05.858675    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:05.872583    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:05.872601    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:05.886560    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:05.886572    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:05.898269    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:05.898285    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:05.922503    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:05.922513    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:05.935955    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:05.935966    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:05.976681    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:05.976688    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:05.987750    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:05.987762    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:06.003258    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:06.003268    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:06.015094    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:06.015107    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:06.027051    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:06.027061    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:06.038111    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:06.038122    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:06.042535    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:06.042541    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:06.053959    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:06.053972    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:06.065150    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:06.065163    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:06.083155    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:06.083172    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:43:08.596424    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:13.598893    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:13.599109    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:13.618137    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:13.618248    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:13.631561    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:13.631650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:13.643818    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:13.643896    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:13.654598    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:13.654680    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:13.664761    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:13.664848    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:13.676055    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:13.676136    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:13.686433    4051 logs.go:282] 0 containers: []
	W1014 07:43:13.686444    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:13.686511    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:13.697133    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:13.697152    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:13.697158    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:13.708226    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:13.708238    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:13.719827    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:13.719837    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:13.743211    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:13.743219    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:13.757140    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:13.757155    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:13.774674    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:13.774685    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:13.785516    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:13.785527    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:13.797190    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:13.797200    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:13.831646    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:13.831656    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:13.843182    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:13.843197    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:13.854285    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:13.854303    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:13.867068    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:13.867079    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:13.910602    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:13.910609    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:13.914782    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:13.914790    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:43:13.925991    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:13.926003    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:13.938015    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:13.938026    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:13.952045    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:13.952056    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:16.471728    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:21.473936    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:21.474373    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:21.514613    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:21.514748    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:21.531869    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:21.531969    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:21.545437    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:21.545525    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:21.556575    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:21.556658    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:21.567746    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:21.567826    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:21.578614    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:21.578681    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:21.589554    4051 logs.go:282] 0 containers: []
	W1014 07:43:21.589566    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:21.589637    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:21.600644    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:21.600662    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:21.600668    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:21.626577    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:21.626591    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:21.649136    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:21.649143    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:21.660743    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:21.660753    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:21.672816    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:21.672827    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:21.684889    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:21.684902    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:21.696828    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:21.696843    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:21.708498    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:21.708509    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:21.719944    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:21.719955    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:21.735120    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:21.735134    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:21.746549    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:21.746564    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:43:21.757765    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:21.757775    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:21.761902    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:21.761907    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:21.775620    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:21.775630    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:21.787658    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:21.787672    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:21.827960    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:21.827968    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:21.864904    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:21.864916    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:24.381797    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:29.382661    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:29.382850    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:29.407212    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:29.407305    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:29.419704    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:29.419789    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:29.429895    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:29.429976    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:29.440641    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:29.440729    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:29.451337    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:29.451409    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:29.462572    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:29.462649    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:29.472769    4051 logs.go:282] 0 containers: []
	W1014 07:43:29.472780    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:29.472843    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:29.483713    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:29.483732    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:29.483739    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:29.504026    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:29.504036    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:29.516118    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:29.516127    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:29.533214    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:29.533223    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:29.546383    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:29.546393    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:29.558308    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:29.558321    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:29.572377    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:29.572388    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:29.583688    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:29.583699    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:29.603486    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:29.603497    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:29.615402    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:29.615411    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:29.652367    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:29.652381    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:43:29.663608    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:29.663622    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:29.680017    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:29.680029    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:29.704137    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:29.704144    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:29.746124    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:29.746132    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:29.750415    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:29.750423    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:29.761937    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:29.761947    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:32.275549    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:37.277851    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:37.277945    4051 kubeadm.go:597] duration metric: took 4m4.247775583s to restartPrimaryControlPlane
	W1014 07:43:37.278023    4051 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 07:43:37.278052    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1014 07:43:38.307452    4051 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.029407084s)
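	[Editor's note] The kubeadm.go:597 line above marks the give-up point: after roughly four minutes of failed /healthz probes, minikube abandons restarting the existing control plane, wipes it with `kubeadm reset --force`, and re-initializes from the generated kubeadm.yaml (the `kubeadm init` run that follows). A rough local sketch of that fallback, hedged heavily: minikube actually runs these commands over SSH via ssh_runner, and its real init call lists individual --ignore-preflight-errors checks rather than "all".

```go
package main

import (
	"fmt"
	"os/exec"
)

// resetAndReinit sketches the fallback path: reset the control plane,
// then re-run kubeadm init. Hypothetical and simplified; not minikube's
// actual API.
func resetAndReinit(kubeadm string) error {
	reset := exec.Command("sudo", kubeadm, "reset",
		"--cri-socket", "/var/run/cri-dockerd.sock", "--force")
	if out, err := reset.CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm reset: %v: %s", err, out)
	}
	init := exec.Command("sudo", kubeadm, "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=all") // real run names each check explicitly
	if out, err := init.CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm init: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(resetAndReinit("/var/lib/minikube/binaries/v1.24.1/kubeadm"))
}
```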
	I1014 07:43:38.307537    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:43:38.312636    4051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:43:38.315464    4051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:43:38.318390    4051 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:43:38.318397    4051 kubeadm.go:157] found existing configuration files:
	
	I1014 07:43:38.318426    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/admin.conf
	I1014 07:43:38.321091    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:43:38.321119    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:43:38.323770    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/kubelet.conf
	I1014 07:43:38.326997    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:43:38.327025    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:43:38.330293    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/controller-manager.conf
	I1014 07:43:38.332948    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:43:38.332977    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:43:38.335560    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/scheduler.conf
	I1014 07:43:38.338947    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:43:38.338979    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
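	[Editor's note] The grep/rm pairs above implement a stale-config sweep: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint (https://control-plane.minikube.internal:61423) and deletes the file when the endpoint is absent, so the upcoming `kubeadm init` regenerates it. Here the files do not exist at all (grep exits 2), so each `rm -f` is a no-op. A minimal local-filesystem sketch of the same idea, assuming direct file access rather than SSH:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// sweepStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint. Local sketch; minikube does this over
// SSH with grep and rm -f.
func sweepStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing, or pointing at a different endpoint: remove it so
			// kubeadm init writes a fresh one.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			os.Remove(p) // ignore error; the file may already be gone
		}
	}
}

func main() {
	sweepStaleKubeconfigs("https://control-plane.minikube.internal:61423",
		[]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
}
```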
	I1014 07:43:38.341895    4051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:43:38.360758    4051 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1014 07:43:38.360819    4051 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:43:38.409362    4051 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:43:38.409419    4051 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:43:38.409465    4051 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:43:38.461932    4051 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:43:38.466102    4051 out.go:235]   - Generating certificates and keys ...
	I1014 07:43:38.466199    4051 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:43:38.466304    4051 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:43:38.466454    4051 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 07:43:38.466514    4051 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 07:43:38.466556    4051 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 07:43:38.466602    4051 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 07:43:38.466637    4051 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 07:43:38.466673    4051 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 07:43:38.466809    4051 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 07:43:38.466936    4051 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 07:43:38.466992    4051 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 07:43:38.467066    4051 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:43:38.513724    4051 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:43:38.656411    4051 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:43:38.745475    4051 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:43:39.046344    4051 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:43:39.078756    4051 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:43:39.079149    4051 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:43:39.079283    4051 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:43:39.164836    4051 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:43:39.167882    4051 out.go:235]   - Booting up control plane ...
	I1014 07:43:39.167927    4051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:43:39.167967    4051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:43:39.169800    4051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:43:39.170012    4051 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:43:39.170780    4051 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 07:43:43.172171    4051 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001445 seconds
	I1014 07:43:43.172230    4051 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:43:43.175756    4051 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:43:43.685275    4051 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:43:43.685382    4051 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-116000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:43:44.193349    4051 kubeadm.go:310] [bootstrap-token] Using token: tk7xbh.c8eu9acuhz8aq2dm
	I1014 07:43:44.199658    4051 out.go:235]   - Configuring RBAC rules ...
	I1014 07:43:44.199733    4051 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:43:44.199784    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:43:44.206194    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:43:44.207101    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:43:44.208125    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:43:44.209182    4051 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:43:44.213174    4051 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:43:44.410044    4051 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:43:44.598535    4051 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:43:44.598987    4051 kubeadm.go:310] 
	I1014 07:43:44.599019    4051 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:43:44.599022    4051 kubeadm.go:310] 
	I1014 07:43:44.599086    4051 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:43:44.599093    4051 kubeadm.go:310] 
	I1014 07:43:44.599105    4051 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:43:44.599135    4051 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:43:44.599168    4051 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:43:44.599191    4051 kubeadm.go:310] 
	I1014 07:43:44.599233    4051 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:43:44.599238    4051 kubeadm.go:310] 
	I1014 07:43:44.599263    4051 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:43:44.599268    4051 kubeadm.go:310] 
	I1014 07:43:44.599295    4051 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:43:44.599349    4051 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:43:44.599434    4051 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:43:44.599439    4051 kubeadm.go:310] 
	I1014 07:43:44.599488    4051 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:43:44.599558    4051 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:43:44.599562    4051 kubeadm.go:310] 
	I1014 07:43:44.599622    4051 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tk7xbh.c8eu9acuhz8aq2dm \
	I1014 07:43:44.599691    4051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 \
	I1014 07:43:44.599704    4051 kubeadm.go:310] 	--control-plane 
	I1014 07:43:44.599707    4051 kubeadm.go:310] 
	I1014 07:43:44.599774    4051 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:43:44.599780    4051 kubeadm.go:310] 
	I1014 07:43:44.599828    4051 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tk7xbh.c8eu9acuhz8aq2dm \
	I1014 07:43:44.599882    4051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 
	I1014 07:43:44.600013    4051 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:43:44.600036    4051 cni.go:84] Creating CNI manager for ""
	I1014 07:43:44.600046    4051 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:43:44.602834    4051 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 07:43:44.609798    4051 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 07:43:44.612728    4051 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 07:43:44.617383    4051 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:43:44.617433    4051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:43:44.617648    4051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-116000 minikube.k8s.io/updated_at=2024_10_14T07_43_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=running-upgrade-116000 minikube.k8s.io/primary=true
	I1014 07:43:44.660096    4051 kubeadm.go:1113] duration metric: took 42.707709ms to wait for elevateKubeSystemPrivileges
	I1014 07:43:44.660116    4051 ops.go:34] apiserver oom_adj: -16
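	[Editor's note] The `oom_adj: -16` reading comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command a few lines up: a negative score tells the kernel's OOM killer to strongly prefer other victims, so the apiserver survives memory pressure. A small sketch of the same check done natively in Go (hypothetical helper; minikube shells out over SSH instead, and /proc/<pid>/oom_adj is the legacy interface to the newer oom_score_adj):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// oomAdj reads /proc/<pid>/oom_adj. Negative values (like the -16 in the
// log) make the kernel's OOM killer avoid the process. Sketch only.
func oomAdj(pid int) (int, error) {
	raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(raw)))
}

func main() {
	adj, err := oomAdj(os.Getpid()) // demo on our own pid
	fmt.Println(adj, err)
}
```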
	I1014 07:43:44.666395    4051 kubeadm.go:394] duration metric: took 4m11.667224s to StartCluster
	I1014 07:43:44.666412    4051 settings.go:142] acquiring lock: {Name:mk5f137d4011ca4bbc3c8514f15406fc4b6b595c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:43:44.666525    4051 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:43:44.666966    4051 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/kubeconfig: {Name:mkbe79fce3a1d9ddd6036a978e097f20767985b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:43:44.667339    4051 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:43:44.667398    4051 config.go:182] Loaded profile config "running-upgrade-116000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:43:44.667387    4051 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:43:44.667497    4051 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-116000"
	I1014 07:43:44.667504    4051 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-116000"
	W1014 07:43:44.667507    4051 addons.go:243] addon storage-provisioner should already be in state true
	I1014 07:43:44.667521    4051 host.go:66] Checking if "running-upgrade-116000" exists ...
	I1014 07:43:44.667535    4051 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-116000"
	I1014 07:43:44.667564    4051 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-116000"
	I1014 07:43:44.668940    4051 kapi.go:59] client config for running-upgrade-116000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/client.key", CAFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10257ae40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:43:44.669337    4051 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-116000"
	W1014 07:43:44.669342    4051 addons.go:243] addon default-storageclass should already be in state true
	I1014 07:43:44.669349    4051 host.go:66] Checking if "running-upgrade-116000" exists ...
	I1014 07:43:44.671736    4051 out.go:177] * Verifying Kubernetes components...
	I1014 07:43:44.672129    4051 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:43:44.675898    4051 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:43:44.675905    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	I1014 07:43:44.679741    4051 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:43:44.683751    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:43:44.687853    4051 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:43:44.687869    4051 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:43:44.687881    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	I1014 07:43:44.783103    4051 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:43:44.788623    4051 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:43:44.788678    4051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:43:44.792406    4051 api_server.go:72] duration metric: took 125.057209ms to wait for apiserver process to appear ...
	I1014 07:43:44.792415    4051 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:43:44.792422    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:44.839656    4051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:43:44.862859    4051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:43:45.154848    4051 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:43:45.154860    4051 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:43:49.794425    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:49.794505    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:54.794875    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:54.794905    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:59.795062    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:59.795079    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:04.804478    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:04.804543    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:09.811837    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:09.811889    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:14.817504    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:14.817545    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1014 07:44:15.178440    4051 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1014 07:44:15.186836    4051 out.go:177] * Enabled addons: storage-provisioner
	I1014 07:44:15.192315    4051 addons.go:510] duration metric: took 30.504722417s for enable addons: enabled=[storage-provisioner]
	I1014 07:44:19.822038    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:19.822062    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:24.825811    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:24.825838    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:29.829182    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:29.829219    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:34.832630    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:34.832649    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:39.834231    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:39.834284    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:44.837222    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:44.837361    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:44:44.848499    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:44:44.848583    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:44:44.859452    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:44:44.859531    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:44:44.870390    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:44:44.870466    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:44:44.880834    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:44:44.880912    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:44:44.891622    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:44:44.891709    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:44:44.902232    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:44:44.902312    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:44:44.916770    4051 logs.go:282] 0 containers: []
	W1014 07:44:44.916778    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:44:44.916842    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:44:44.927653    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:44:44.927668    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:44:44.927674    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:44:44.941697    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:44:44.941710    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:44:44.953471    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:44:44.953484    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:44:44.969467    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:44:44.969476    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:44:44.983463    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:44.983476    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:44.987852    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:44.987859    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:45.023676    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:44:45.023686    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:44:45.038326    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:44:45.038339    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:44:45.050711    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:44:45.050724    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:44:45.063426    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:44:45.063436    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:44:45.080853    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:44:45.080866    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:44:45.092578    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:44:45.092592    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:44:45.117360    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:44:45.117367    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
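Because healthz never succeeds, minikube falls back to the diagnostic pass above: discover each control-plane container by its k8s_<component> name filter, then tail the last 400 lines of its logs. A sketch of that loop, run locally for illustration — in the test these same docker commands go through ssh_runner into the VM:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func gatherLogs(component string) error {
    	// Find container IDs whose names match k8s_<component>.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		fmt.Printf("No container was found matching %q\n", component)
    		return nil
    	}
    	for _, id := range ids {
    		// Tail the last 400 lines, as in the log above.
    		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return err
    		}
    		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
    	}
    	return nil
    }

    func main() {
    	// Same component list the diagnostic pass iterates over.
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		if err := gatherLogs(c); err != nil {
    			fmt.Println(err)
    		}
    	}
    }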
	I1014 07:44:47.655679    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:52.658272    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:52.658399    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:44:52.669789    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:44:52.669875    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:44:52.680038    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:44:52.680117    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:44:52.690695    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:44:52.690771    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:44:52.701426    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:44:52.701507    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:44:52.711860    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:44:52.711947    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:44:52.725950    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:44:52.726030    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:44:52.736569    4051 logs.go:282] 0 containers: []
	W1014 07:44:52.736583    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:44:52.736650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:44:52.747833    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:44:52.747849    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:44:52.747855    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:44:52.768233    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:44:52.768242    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:44:52.779785    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:44:52.779794    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:44:52.806857    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:44:52.806873    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:44:52.841903    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:52.841913    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:52.879500    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:44:52.879513    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:44:52.893968    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:44:52.893978    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:44:52.905639    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:44:52.905650    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:44:52.917548    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:52.917557    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:52.922654    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:44:52.922661    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:44:52.937116    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:44:52.937127    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:44:52.948920    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:44:52.948929    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:44:52.970801    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:44:52.970811    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:44:55.485227    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:00.487680    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:00.487793    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:00.499343    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:00.499425    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:00.511380    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:00.511472    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:00.525459    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:00.525544    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:00.536598    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:00.536667    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:00.548233    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:00.548316    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:00.561515    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:00.561594    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:00.572417    4051 logs.go:282] 0 containers: []
	W1014 07:45:00.572429    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:00.572494    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:00.582967    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:00.582984    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:00.582990    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:00.595060    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:00.595071    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:00.631629    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:00.631640    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:00.648073    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:00.648083    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:00.665030    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:00.665040    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:00.676159    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:00.676170    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:00.687637    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:00.687648    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:00.702279    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:00.702289    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:00.714102    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:00.714112    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:00.731441    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:00.731451    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:00.765326    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:00.765337    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:00.770085    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:00.770090    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:00.781512    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:00.781521    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:03.306607    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:08.308950    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:08.309187    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:08.327344    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:08.327449    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:08.340646    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:08.340726    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:08.352600    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:08.352674    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:08.364886    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:08.364969    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:08.375932    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:08.376014    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:08.391799    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:08.391877    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:08.402225    4051 logs.go:282] 0 containers: []
	W1014 07:45:08.402241    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:08.402311    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:08.412556    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:08.412579    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:08.412585    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:08.436227    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:08.436236    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:08.448153    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:08.448163    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:08.463078    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:08.463090    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:08.476235    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:08.476248    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:08.487888    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:08.487900    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:08.499693    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:08.499703    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:08.513911    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:08.513923    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:08.533964    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:08.533974    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:08.569297    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:08.569306    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:08.573453    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:08.573459    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:08.610304    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:08.610314    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:08.624516    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:08.624527    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:11.141904    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:16.144121    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:16.144304    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:16.157011    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:16.157101    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:16.167586    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:16.167660    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:16.177941    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:16.178007    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:16.188802    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:16.188865    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:16.199079    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:16.199157    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:16.210084    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:16.210158    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:16.220025    4051 logs.go:282] 0 containers: []
	W1014 07:45:16.220036    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:16.220107    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:16.230483    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:16.230500    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:16.230507    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:16.235554    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:16.235561    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:16.270627    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:16.270638    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:16.285153    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:16.285163    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:16.296940    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:16.296952    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:16.308904    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:16.308915    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:16.333532    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:16.333542    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:16.370595    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:16.370606    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:16.384785    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:16.384796    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:16.396771    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:16.396782    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:16.408196    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:16.408207    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:16.426200    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:16.426210    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:16.443554    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:16.443564    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:18.957162    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:23.959339    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:23.959438    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:23.971321    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:23.971398    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:23.983018    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:23.983091    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:23.995086    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:23.995166    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:24.007099    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:24.007177    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:24.017937    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:24.018015    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:24.029086    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:24.029161    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:24.040461    4051 logs.go:282] 0 containers: []
	W1014 07:45:24.040474    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:24.040538    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:24.055086    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:24.055102    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:24.055107    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:24.092229    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:24.092236    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:24.096694    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:24.096701    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:24.135963    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:24.135974    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:24.151056    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:24.151066    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:24.169817    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:24.169827    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:24.181872    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:24.181883    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:24.196491    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:24.196502    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:24.210189    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:24.210200    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:24.222308    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:24.222318    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:24.242117    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:24.242126    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:24.255143    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:24.255152    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:24.278679    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:24.278692    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:26.792901    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:31.795129    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:31.795228    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:31.806768    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:31.806837    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:31.818126    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:31.818209    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:31.830340    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:31.830412    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:31.842419    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:31.842495    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:31.854176    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:31.854270    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:31.865705    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:31.865787    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:31.876622    4051 logs.go:282] 0 containers: []
	W1014 07:45:31.876635    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:31.876704    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:31.887262    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:31.887278    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:31.887283    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:31.899602    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:31.899611    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:31.915520    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:31.915536    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:31.927755    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:31.927765    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:31.950845    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:31.950853    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:31.985171    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:31.985179    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:31.989489    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:31.989494    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:32.031130    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:32.031141    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:32.045997    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:32.046012    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:32.058682    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:32.058696    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:32.074190    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:32.074201    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:32.086646    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:32.086657    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:32.105027    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:32.105040    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:34.619644    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:39.621821    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:39.621920    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:39.637441    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:39.637522    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:39.648535    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:39.648616    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:39.659997    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:39.660109    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:39.674449    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:39.674519    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:39.694919    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:39.694992    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:39.705761    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:39.705839    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:39.716219    4051 logs.go:282] 0 containers: []
	W1014 07:45:39.716230    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:39.716296    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:39.727397    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:39.727412    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:39.727418    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:39.745502    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:39.745511    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:39.759272    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:39.759283    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:39.785332    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:39.785340    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:39.821171    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:39.821186    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:39.826260    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:39.826268    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:39.838535    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:39.838547    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:39.855389    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:39.855400    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:39.871490    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:39.871501    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:39.883724    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:39.883733    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:39.896544    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:39.896555    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:39.932702    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:39.932713    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:39.948052    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:39.948062    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:42.464408    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:47.466700    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:47.466883    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:47.478839    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:47.478920    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:47.490810    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:47.490885    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:47.502390    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:45:47.502471    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:47.513319    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:47.513408    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:47.524354    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:47.524431    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:47.535375    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:47.535449    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:47.546448    4051 logs.go:282] 0 containers: []
	W1014 07:45:47.546468    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:47.546529    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:47.558348    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:47.558368    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:47.558375    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:47.576731    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:47.576746    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:47.591814    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:47.591824    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:47.606414    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:47.606425    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:47.622217    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:45:47.622227    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:45:47.634690    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:47.634706    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:47.646705    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:47.646715    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:47.683035    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:45:47.683045    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:45:47.698002    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:47.698015    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:47.710858    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:47.710868    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:47.729283    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:47.729293    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:47.742214    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:47.742223    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:47.778795    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:47.778805    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:47.783612    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:47.783620    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:47.795838    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:47.795851    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:50.320701    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:55.322924    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:55.323088    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:55.337256    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:55.337347    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:55.348267    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:55.348344    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:55.358766    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:45:55.358838    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:55.369086    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:55.369169    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:55.379448    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:55.379524    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:55.392976    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:55.393057    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:55.403293    4051 logs.go:282] 0 containers: []
	W1014 07:45:55.403305    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:55.403375    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:55.414107    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:55.414126    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:55.414132    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:55.433170    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:55.433180    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:55.467812    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:55.467830    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:55.503425    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:45:55.503440    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:45:55.516226    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:55.516239    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:55.531741    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:55.531751    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:55.543781    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:55.543791    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:55.548232    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:55.548240    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:55.562688    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:55.562698    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:55.574858    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:55.574868    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:55.586469    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:55.586480    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:55.600401    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:45:55.600412    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:45:55.615960    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:55.615971    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:55.640229    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:55.640239    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:55.652567    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:55.652577    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:58.166635    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:03.168819    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:03.168979    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:03.183721    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:03.183802    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:03.196107    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:03.196186    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:03.206977    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:03.207054    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:03.217237    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:03.217316    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:03.227485    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:03.227562    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:03.238683    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:03.238753    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:03.250080    4051 logs.go:282] 0 containers: []
	W1014 07:46:03.250094    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:03.250165    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:03.262040    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:03.262056    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:03.262062    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:03.276223    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:03.276236    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:03.287721    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:03.287733    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:03.320834    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:03.320842    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:03.356367    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:03.356378    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:03.374217    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:03.374227    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:03.393326    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:03.393339    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:03.408171    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:03.408182    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:03.420049    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:03.420059    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:03.443124    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:03.443131    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:03.454574    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:03.454584    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:03.468932    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:03.468942    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:03.481903    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:03.481913    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:03.494039    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:03.494052    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:03.498814    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:03.498820    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
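
The block above is one full pass of minikube's diagnostic loop: api_server.go probes https://10.0.2.15:8443/healthz, the request dies after five seconds with "Client.Timeout exceeded while awaiting headers", and the runner then re-enumerates the control-plane containers and tails their logs before probing again. A minimal Go sketch of that polling step follows; it illustrates the pattern visible in the log, not minikube's actual implementation (the real checker authenticates against the cluster CA instead of skipping TLS verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // mirrors "Client.Timeout exceeded while awaiting headers" after ~5s
            Transport: &http.Transport{
                // Simplification for a self-contained sketch; minikube verifies
                // the apiserver cert against the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Printf("stopped: %v\n", err) // timeout or connection refused
            } else {
                fmt.Printf("healthz: %s\n", resp.Status)
                resp.Body.Close()
                return
            }
            time.Sleep(2 * time.Second) // the log shows a fresh attempt every few seconds
        }
    }
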
	I1014 07:46:06.012705    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:11.015010    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:11.015154    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:11.026701    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:11.026789    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:11.037559    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:11.037635    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:11.048142    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:11.048226    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:11.059315    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:11.059390    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:11.069604    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:11.069688    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:11.079998    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:11.080072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:11.089703    4051 logs.go:282] 0 containers: []
	W1014 07:46:11.089719    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:11.089780    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:11.100671    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:11.100688    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:11.100694    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:11.136494    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:11.136501    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:11.152407    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:11.152421    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:11.177097    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:11.177109    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:11.188821    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:11.188832    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:11.201361    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:11.201375    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:11.219761    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:11.219778    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:11.231316    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:11.231326    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:11.235853    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:11.235858    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:11.270498    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:11.270508    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:11.296665    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:11.296675    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:11.312098    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:11.312108    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:11.326199    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:11.326209    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:11.340061    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:11.340071    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:11.352133    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:11.352143    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
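
Each pass begins by resolving component names to container IDs with the same docker ps filter, one k8s_<component> name prefix at a time. A hedged reproduction of that enumeration step, assuming a local docker CLI rather than minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same command as the log lines above and
    // returns one ID per matching container (running or exited).
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // e.g. "4 containers: [...]" for coredns in the run above
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }
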
	I1014 07:46:13.871166    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:18.873300    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:18.873475    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:18.885179    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:18.885260    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:18.900860    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:18.900941    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:18.911794    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:18.911873    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:18.922344    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:18.922430    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:18.933004    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:18.933084    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:18.943504    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:18.943579    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:18.954098    4051 logs.go:282] 0 containers: []
	W1014 07:46:18.954109    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:18.954169    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:18.964839    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:18.964855    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:18.964861    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:18.977130    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:18.977143    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:18.989006    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:18.989018    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:19.006519    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:19.006529    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:19.018297    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:19.018309    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:19.044566    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:19.044576    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:19.049315    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:19.049323    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:19.084628    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:19.084642    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:19.096302    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:19.096313    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:19.107652    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:19.107666    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:19.124444    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:19.124457    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:19.136399    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:19.136411    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:19.148884    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:19.148894    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:19.185258    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:19.185279    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:19.200030    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:19.200040    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:21.717289    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:26.719451    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:26.719593    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:26.732406    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:26.732495    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:26.743417    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:26.743499    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:26.758556    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:26.758644    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:26.770222    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:26.770303    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:26.781229    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:26.781309    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:26.795565    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:26.795646    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:26.806277    4051 logs.go:282] 0 containers: []
	W1014 07:46:26.806291    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:26.806361    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:26.816898    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:26.816916    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:26.816921    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:26.827995    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:26.828007    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:26.846307    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:26.846316    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:26.858374    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:26.858385    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:26.870025    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:26.870037    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:26.874642    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:26.874649    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:26.888863    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:26.888874    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:26.900505    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:26.900515    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:26.915973    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:26.915983    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:26.940775    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:26.940782    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:26.976769    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:26.976780    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:26.988419    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:26.988429    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:27.003602    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:27.003614    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:27.020661    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:27.020672    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:27.036252    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:27.036263    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:29.573468    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:34.575647    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:34.575811    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:34.586652    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:34.586734    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:34.600370    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:34.600449    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:34.610708    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:34.610783    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:34.621062    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:34.621136    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:34.631668    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:34.631751    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:34.642533    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:34.642616    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:34.652432    4051 logs.go:282] 0 containers: []
	W1014 07:46:34.652445    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:34.652510    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:34.663068    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:34.663083    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:34.663089    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:34.674684    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:34.674696    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:34.708444    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:34.708455    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:34.726485    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:34.726496    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:34.740890    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:34.740899    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:34.754274    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:34.754288    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:34.766312    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:34.766322    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:34.783923    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:34.783935    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:34.812462    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:34.812480    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:34.817186    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:34.817196    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:34.853942    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:34.853953    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:34.865744    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:34.865756    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:34.877954    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:34.877965    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:34.893552    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:34.893562    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:34.906149    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:34.906161    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
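
The "container status" step that closes each pass uses a shell fallback: run crictl if it resolves on PATH, otherwise fall back to plain docker ps -a. The same fallback, sketched in Go under the assumption that sudo is available non-interactively:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the one-liner from the log:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        // crictl missing or failed: fall back to the docker CLI
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        status, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(status)
    }
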
	I1014 07:46:37.420128    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:42.421848    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:42.422072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:42.436142    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:42.436231    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:42.451551    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:42.451629    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:42.462451    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:42.462525    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:42.473994    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:42.474066    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:42.487374    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:42.487454    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:42.498187    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:42.498269    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:42.508973    4051 logs.go:282] 0 containers: []
	W1014 07:46:42.508987    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:42.509054    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:42.520326    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:42.520347    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:42.520353    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:42.536679    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:42.536689    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:42.548456    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:42.548468    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:42.573595    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:42.573606    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:42.609010    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:42.609022    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:42.613969    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:42.613976    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:42.629244    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:42.629257    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:42.644897    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:42.644907    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:42.656652    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:42.656661    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:42.691814    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:42.691827    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:42.705179    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:42.705190    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:42.717482    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:42.717497    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:42.729519    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:42.729529    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:42.745399    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:42.745410    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:42.757717    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:42.757727    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:45.278276    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:50.280544    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:50.280690    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:50.291709    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:50.291785    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:50.302236    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:50.302305    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:50.313183    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:50.313266    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:50.327492    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:50.327579    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:50.338366    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:50.338444    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:50.349244    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:50.349325    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:50.359482    4051 logs.go:282] 0 containers: []
	W1014 07:46:50.359496    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:50.359564    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:50.370305    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:50.370321    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:50.370326    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:50.406722    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:50.406755    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:50.419033    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:50.419046    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:50.430956    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:50.430966    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:50.442518    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:50.442530    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:50.460534    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:50.460547    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:50.472344    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:50.472354    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:50.497278    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:50.497287    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:50.511308    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:50.511324    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:50.528340    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:50.528350    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:50.543452    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:50.543463    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:50.547822    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:50.547830    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:50.583466    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:50.583477    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:50.598267    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:50.598277    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:50.609810    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:50.609821    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:53.123774    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:58.125937    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:58.126050    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:58.136897    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:58.136980    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:58.146965    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:58.147054    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:58.157562    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:58.157649    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:58.168661    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:58.168740    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:58.179544    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:58.179618    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:58.190397    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:58.190468    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:58.200579    4051 logs.go:282] 0 containers: []
	W1014 07:46:58.200597    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:58.200663    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:58.212087    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:58.212104    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:58.212109    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:58.223651    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:58.223660    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:58.244707    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:58.244717    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:58.256346    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:58.256357    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:58.281865    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:58.281876    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:58.296671    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:58.296680    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:58.308926    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:58.308944    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:58.320576    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:58.320589    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:58.336331    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:58.336343    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:58.348397    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:58.348408    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:58.383835    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:58.383845    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:58.397621    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:58.397632    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:58.409540    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:58.409553    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:58.445337    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:58.445348    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:58.450327    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:58.450333    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
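
The remaining gathering commands are uniform: the last 400 lines of each container via docker logs --tail 400, and the systemd units (kubelet, docker, cri-docker) via journalctl -n 400. A small sketch of both, using the apiserver container ID from this run purely as an example:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // docker logs replays the container's stdout on stdout and its stderr
    // on stderr, so CombinedOutput captures both streams.
    func tailContainer(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func tailUnit(unit string) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
        return string(out), err
    }

    func main() {
        if logs, err := tailContainer("1669a9fff277"); err == nil { // kube-apiserver ID in this run
            fmt.Println(logs)
        }
        if logs, err := tailUnit("kubelet"); err == nil {
            fmt.Println(logs)
        }
    }
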
	I1014 07:47:00.968108    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:05.970240    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:05.970346    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:05.981555    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:05.981650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:05.992318    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:05.992404    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:06.002964    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:06.003046    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:06.013720    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:06.013798    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:06.023931    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:06.024010    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:06.034704    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:06.034778    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:06.045267    4051 logs.go:282] 0 containers: []
	W1014 07:47:06.045277    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:06.045341    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:06.055808    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:06.055825    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:06.055831    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:06.067241    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:06.067251    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:06.071635    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:06.071644    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:06.108492    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:06.108503    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:06.122391    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:06.122401    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:06.139642    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:06.139654    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:06.153779    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:06.153789    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:06.189339    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:06.189348    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:06.204092    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:06.204102    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:06.220008    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:06.220018    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:06.231516    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:06.231526    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:06.247252    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:06.247263    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:06.261856    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:06.261865    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:06.274127    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:06.274138    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:06.289733    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:06.289744    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:08.817148    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:13.819372    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:13.819475    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:13.830514    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:13.830597    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:13.841134    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:13.841210    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:13.851849    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:13.851941    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:13.862736    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:13.862814    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:13.873153    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:13.873232    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:13.883973    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:13.884058    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:13.894026    4051 logs.go:282] 0 containers: []
	W1014 07:47:13.894037    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:13.894101    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:13.905153    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:13.905170    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:13.905176    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:13.917558    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:13.917569    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:13.933658    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:13.933669    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:13.948114    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:13.948126    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:13.965786    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:13.965796    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:14.002243    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:14.002257    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:14.006712    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:14.006719    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:14.031552    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:14.031571    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:14.043480    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:14.043493    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:14.055400    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:14.055411    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:14.070815    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:14.070825    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:14.109952    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:14.109965    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:14.126490    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:14.126501    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:14.141575    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:14.141584    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:14.153442    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:14.153452    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:16.667075    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:21.669188    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:21.669295    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:21.680609    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:21.680679    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:21.691145    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:21.691254    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:21.702092    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:21.702171    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:21.712199    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:21.712282    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:21.722772    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:21.722877    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:21.733317    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:21.733392    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:21.744009    4051 logs.go:282] 0 containers: []
	W1014 07:47:21.744021    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:21.744089    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:21.757299    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:21.757315    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:21.757320    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:21.772811    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:21.772823    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:21.790359    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:21.790371    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:21.815870    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:21.815880    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:21.830102    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:21.830112    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:21.842196    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:21.842209    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:21.854353    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:21.854363    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:21.866836    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:21.866849    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:21.881317    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:21.881328    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:21.897177    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:21.897187    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:21.909018    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:21.909033    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:21.920576    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:21.920591    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:21.955946    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:21.955957    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:21.961131    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:21.961139    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:21.977614    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:21.977626    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:24.513458    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:29.515624    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:29.515777    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:29.527144    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:29.527224    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:29.538284    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:29.538367    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:29.551663    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:29.551743    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:29.562264    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:29.562341    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:29.572863    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:29.572941    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:29.590261    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:29.590341    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:29.600535    4051 logs.go:282] 0 containers: []
	W1014 07:47:29.600547    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:29.600618    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:29.611510    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:29.611530    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:29.611535    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:29.623408    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:29.623419    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:29.634670    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:29.634680    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:29.646621    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:29.646632    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:29.658340    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:29.658350    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:29.675543    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:29.675556    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:29.698793    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:29.698800    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:29.732806    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:29.732819    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:29.748761    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:29.748771    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:29.763320    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:29.763330    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:29.775353    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:29.775364    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:29.787467    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:29.787478    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:29.800759    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:29.800769    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:29.836392    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:29.836400    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:29.851418    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:29.851430    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:32.358407    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:37.360730    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:37.360838    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:37.372334    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:37.372413    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:37.382685    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:37.382760    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:37.393459    4051 logs.go:282] 4 containers: [c3d009bc2ad8 468a0e63e316 09d3ed4d75e8 fbe909541ee8]
	I1014 07:47:37.393539    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:37.404715    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:37.404795    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:37.415345    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:37.415419    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:37.425999    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:37.426071    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:37.436276    4051 logs.go:282] 0 containers: []
	W1014 07:47:37.436288    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:37.436356    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:37.446409    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
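	[editor's note] The discovery step above and the gathering step below repeat as a fixed pattern: find each control-plane component's container ID with a filtered `docker ps`, then tail its last 400 log lines. A minimal Go sketch of that pattern (an illustration only, not minikube's logs.go; it assumes a local `docker` CLI rather than the SSH runner used in this log):

```go
// discover_and_tail.go - hypothetical sketch of the discover-then-gather
// pattern in this log: find container IDs by k8s_<component> name filter,
// then tail the last 400 log lines of each.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
		for _, id := range ids {
			// Equivalent of: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== logs for %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```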
	I1014 07:47:37.446428    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:37.446434    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:37.482003    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:37.482014    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:37.493950    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:37.493961    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:37.507908    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:37.507918    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:37.520187    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:37.520197    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:37.531751    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:37.531765    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:37.543219    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:37.543230    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:37.565095    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:37.565105    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:37.589945    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:37.589960    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:37.604280    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:37.604294    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:37.608722    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:37.608729    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:37.645511    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:37.645524    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:37.660237    4051 logs.go:123] Gathering logs for coredns [c3d009bc2ad8] ...
	I1014 07:47:37.660248    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d009bc2ad8"
	I1014 07:47:37.671660    4051 logs.go:123] Gathering logs for coredns [468a0e63e316] ...
	I1014 07:47:37.671672    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468a0e63e316"
	I1014 07:47:37.683198    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:37.683210    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:40.205371    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:45.207577    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:45.211970    4051 out.go:201] 
	W1014 07:47:45.216000    4051 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1014 07:47:45.216006    4051 out.go:270] * 
	W1014 07:47:45.216662    4051 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:47:45.226868    4051 out.go:201] 

** /stderr **
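[editor's note] The failure recorded in the stderr above is a polling loop that never sees a healthy apiserver: each GET of https://10.0.2.15:8443/healthz times out after ~5s, and once the 6m0s node-start budget is spent the run exits with GUEST_START. A hedged Go sketch of such a deadline-bounded health poll (an illustration under those assumptions, not minikube's api_server.go):

```go
// healthz_poll.go - minimal sketch: probe an apiserver /healthz endpoint with
// a short per-request timeout until it answers 200 OK or an overall deadline
// (6m in the log above) expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthy(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks in the log
		Transport: &http.Transport{
			// Assumption: the apiserver serves a self-signed cert during bring-up.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(2 * time.Second) // back off between probes
	}
	return fmt.Errorf("apiserver healthz never reported healthy within %s", overall)
}

func main() {
	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X", err)
	}
}
```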
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-116000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-14 07:47:45.312434 -0700 PDT m=+4187.627495626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-116000 -n running-upgrade-116000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-116000 -n running-upgrade-116000: exit status 2 (15.663256625s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-116000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo cat                            | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo cat                            | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo cat                            | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo cat                            | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo                                | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo find                           | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-513000 sudo crio                           | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-513000                                     | cilium-513000             | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	| start   | -p kubernetes-upgrade-491000                         | kubernetes-upgrade-491000 | jenkins | v1.34.0 | 14 Oct 24 07:37 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-533000                             | offline-docker-533000     | jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	| stop    | -p kubernetes-upgrade-491000                         | kubernetes-upgrade-491000 | jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	| start   | -p stopped-upgrade-496000                            | minikube                  | jenkins | v1.26.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:39 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-491000                         | kubernetes-upgrade-491000 | jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-491000                         | kubernetes-upgrade-491000 | jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	| start   | -p running-upgrade-116000                            | minikube                  | jenkins | v1.26.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:39 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p running-upgrade-116000                            | running-upgrade-116000    | jenkins | v1.34.0 | 14 Oct 24 07:39 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-496000 stop                          | minikube                  | jenkins | v1.26.0 | 14 Oct 24 07:39 PDT | 14 Oct 24 07:39 PDT |
	| start   | -p stopped-upgrade-496000                            | stopped-upgrade-496000    | jenkins | v1.34.0 | 14 Oct 24 07:39 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:39:44
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
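	[editor's note] Every entry below follows the klog convention spelled out in the header line above. A small Go sketch for splitting such lines into fields (the regex is my own, written against that format string, not anything from minikube):

```go
// klog_parse.go - sketch: split klog-style lines of the form
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg into their fields.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I1014 07:39:44.024411    4105 out.go:345] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```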
	I1014 07:39:44.024411    4105 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:39:44.024795    4105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:39:44.024799    4105 out.go:358] Setting ErrFile to fd 2...
	I1014 07:39:44.024802    4105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:39:44.024933    4105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:39:44.026163    4105 out.go:352] Setting JSON to false
	I1014 07:39:44.046735    4105 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4154,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:39:44.046835    4105 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:39:44.051325    4105 out.go:177] * [stopped-upgrade-496000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:39:44.059203    4105 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:39:44.059289    4105 notify.go:220] Checking for updates...
	I1014 07:39:44.067139    4105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:39:44.070176    4105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:39:44.073179    4105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:39:44.076215    4105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:39:44.079235    4105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:39:44.082487    4105 config.go:182] Loaded profile config "stopped-upgrade-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:39:44.086123    4105 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 07:39:44.089144    4105 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:39:44.092122    4105 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:39:44.099204    4105 start.go:297] selected driver: qemu2
	I1014 07:39:44.099210    4105 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:39:44.099274    4105 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:39:44.102108    4105 cni.go:84] Creating CNI manager for ""
	I1014 07:39:44.102146    4105 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:39:44.102179    4105 start.go:340] cluster config:
	{Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:39:44.102238    4105 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:39:44.109249    4105 out.go:177] * Starting "stopped-upgrade-496000" primary control-plane node in "stopped-upgrade-496000" cluster
	I1014 07:39:44.113184    4105 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1014 07:39:44.113199    4105 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1014 07:39:44.113204    4105 cache.go:56] Caching tarball of preloaded images
	I1014 07:39:44.113279    4105 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:39:44.113285    4105 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1014 07:39:44.113338    4105 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/config.json ...
	I1014 07:39:44.113796    4105 start.go:360] acquireMachinesLock for stopped-upgrade-496000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:39:44.113845    4105 start.go:364] duration metric: took 43.209µs to acquireMachinesLock for "stopped-upgrade-496000"
	I1014 07:39:44.113856    4105 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:39:44.113861    4105 fix.go:54] fixHost starting: 
	I1014 07:39:44.113988    4105 fix.go:112] recreateIfNeeded on stopped-upgrade-496000: state=Stopped err=<nil>
	W1014 07:39:44.113996    4105 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:39:44.122163    4105 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-496000" ...
	I1014 07:39:40.646540    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:40.646639    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:39:44.126186    4105 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:39:44.126291    4105 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/qemu.pid -nic user,model=virtio,hostfwd=tcp::61428-:22,hostfwd=tcp::61429-:2376,hostname=stopped-upgrade-496000 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/disk.qcow2
	I1014 07:39:44.173881    4105 main.go:141] libmachine: STDOUT: 
	I1014 07:39:44.173914    4105 main.go:141] libmachine: STDERR: 
	I1014 07:39:44.173919    4105 main.go:141] libmachine: Waiting for VM to start (ssh -p 61428 docker@127.0.0.1)...
	I1014 07:39:45.647650    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:45.647736    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:39:50.648599    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:50.648630    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:39:55.649306    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:39:55.649335    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:03.367095    4105 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/config.json ...
	I1014 07:40:03.367607    4105 machine.go:93] provisionDockerMachine start ...
	I1014 07:40:03.367724    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.367991    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.368000    4105 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:40:03.443237    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:40:03.443257    4105 buildroot.go:166] provisioning hostname "stopped-upgrade-496000"
	I1014 07:40:03.443364    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.443558    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.443569    4105 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-496000 && echo "stopped-upgrade-496000" | sudo tee /etc/hostname
	I1014 07:40:03.516439    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-496000
	
	I1014 07:40:03.516529    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.516674    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.516684    4105 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-496000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-496000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-496000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:40:03.584409    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:40:03.584421    4105 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19790-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19790-979/.minikube}
	I1014 07:40:03.584431    4105 buildroot.go:174] setting up certificates
	I1014 07:40:03.584435    4105 provision.go:84] configureAuth start
	I1014 07:40:03.584438    4105 provision.go:143] copyHostCerts
	I1014 07:40:03.584521    4105 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem, removing ...
	I1014 07:40:03.584528    4105 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem
	I1014 07:40:03.584636    4105 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem (1078 bytes)
	I1014 07:40:03.584839    4105 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem, removing ...
	I1014 07:40:03.584844    4105 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem
	I1014 07:40:03.584905    4105 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem (1123 bytes)
	I1014 07:40:03.585026    4105 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem, removing ...
	I1014 07:40:03.585030    4105 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem
	I1014 07:40:03.585083    4105 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem (1675 bytes)
	I1014 07:40:03.585171    4105 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-496000 san=[127.0.0.1 localhost minikube stopped-upgrade-496000]
	I1014 07:40:03.878183    4105 provision.go:177] copyRemoteCerts
	I1014 07:40:03.878258    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:40:03.878269    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:40:03.910736    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 07:40:03.917778    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 07:40:03.925077    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:40:03.932419    4105 provision.go:87] duration metric: took 347.982333ms to configureAuth
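	[editor's note] The configureAuth step above generates a server certificate whose SANs cover every name the machine may be reached by (127.0.0.1, localhost, minikube, and the profile name). A hedged Go sketch of generating such a SAN-bearing server cert (simplified to self-signing; minikube actually signs with the .minikube CA files listed above):

```go
// servercert.go - sketch: build a server certificate carrying the SANs shown
// in the provision.go line above. Self-signed here for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-496000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: san=[127.0.0.1 localhost minikube stopped-upgrade-496000]
		DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-496000"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```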
	I1014 07:40:03.932429    4105 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:40:03.932526    4105 config.go:182] Loaded profile config "stopped-upgrade-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:40:03.932572    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.932655    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.932660    4105 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:40:03.994501    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:40:03.994510    4105 buildroot.go:70] root file system type: tmpfs
	I1014 07:40:03.994560    4105 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:40:03.994617    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.994736    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.994770    4105 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:40:00.650739    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:00.650829    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:04.058226    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:40:04.058284    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:04.058379    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:04.058388    4105 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:40:04.433815    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
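	[editor's note] The `diff ... || { mv ...; systemctl daemon-reload && ... restart docker; }` command above is a write-if-changed guard: the freshly rendered unit only replaces the installed one, and docker is only restarted, when the content actually differs (here the diff fails because no unit existed yet, so the new file is moved into place). A local-filesystem Go sketch of the same idea (my own helper, independent of minikube internals):

```go
// write_if_changed.go - sketch: swap in a config file, and report that a
// reload is needed, only when the newly rendered content differs.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func updateIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // identical: skip the daemon-reload/restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // atomic swap into place on the same fs
}

func main() {
	changed, err := updateIfChanged("docker.service", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		panic(err)
	}
	if changed {
		fmt.Println("unit changed: would run daemon-reload && restart docker")
	}
}
```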
	
	I1014 07:40:04.433828    4105 machine.go:96] duration metric: took 1.066236875s to provisionDockerMachine
	I1014 07:40:04.433835    4105 start.go:293] postStartSetup for "stopped-upgrade-496000" (driver="qemu2")
	I1014 07:40:04.433842    4105 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:40:04.433920    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:40:04.433929    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:40:04.468119    4105 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:40:04.469443    4105 info.go:137] Remote host: Buildroot 2021.02.12
	I1014 07:40:04.469450    4105 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19790-979/.minikube/addons for local assets ...
	I1014 07:40:04.469535    4105 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19790-979/.minikube/files for local assets ...
	I1014 07:40:04.469678    4105 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem -> 14972.pem in /etc/ssl/certs
	I1014 07:40:04.469843    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:40:04.472823    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem --> /etc/ssl/certs/14972.pem (1708 bytes)
	I1014 07:40:04.480246    4105 start.go:296] duration metric: took 46.406083ms for postStartSetup
	I1014 07:40:04.480259    4105 fix.go:56] duration metric: took 20.366856584s for fixHost
	I1014 07:40:04.480301    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:04.480407    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:04.480411    4105 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:40:04.537993    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728916804.544200337
	
	I1014 07:40:04.538002    4105 fix.go:216] guest clock: 1728916804.544200337
	I1014 07:40:04.538006    4105 fix.go:229] Guest: 2024-10-14 07:40:04.544200337 -0700 PDT Remote: 2024-10-14 07:40:04.480261 -0700 PDT m=+20.488572085 (delta=63.939337ms)
	I1014 07:40:04.538021    4105 fix.go:200] guest clock delta is within tolerance: 63.939337ms
	I1014 07:40:04.538024    4105 start.go:83] releasing machines lock for "stopped-upgrade-496000", held for 20.424632834s
	I1014 07:40:04.538103    4105 ssh_runner.go:195] Run: cat /version.json
	I1014 07:40:04.538114    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:40:04.538103    4105 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 07:40:04.538150    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	W1014 07:40:04.538763    4105 sshutil.go:64] dial failure (will retry): dial tcp [::1]:61428: connect: connection refused
	I1014 07:40:04.538777    4105 retry.go:31] will retry after 258.099859ms: dial tcp [::1]:61428: connect: connection refused
	W1014 07:40:04.569895    4105 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1014 07:40:04.569944    4105 ssh_runner.go:195] Run: systemctl --version
	I1014 07:40:04.571708    4105 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:40:04.573345    4105 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:40:04.573379    4105 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1014 07:40:04.576547    4105 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1014 07:40:04.581400    4105 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:40:04.581408    4105 start.go:495] detecting cgroup driver to use...
	I1014 07:40:04.581490    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:40:04.588737    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1014 07:40:04.592344    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:40:04.595728    4105 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:40:04.595761    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:40:04.599623    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:40:04.602702    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:40:04.605767    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:40:04.609019    4105 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:40:04.612628    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:40:04.616434    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:40:04.620282    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:40:04.623751    4105 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:40:04.627259    4105 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:40:04.630101    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:04.712974    4105 ssh_runner.go:195] Run: sudo systemctl restart containerd
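	[editor's note] The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place to select the cgroupfs driver before containerd is restarted. A rough Go equivalent of one of those edits (my own sketch; the regex mirrors the `SystemdCgroup` sed expression above):

```go
// toml_edit.go - sketch: force SystemdCgroup = false in a containerd
// config.toml while preserving the line's indentation, like the sed edit
// 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n")
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
	fmt.Print(string(updated))
}
```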
	I1014 07:40:04.720000    4105 start.go:495] detecting cgroup driver to use...
	I1014 07:40:04.720089    4105 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:40:04.728599    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:40:04.736763    4105 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:40:04.743531    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:40:04.748127    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:40:04.752829    4105 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:40:04.792608    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:40:04.797883    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:40:04.804199    4105 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:40:04.806244    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:40:04.809419    4105 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1014 07:40:04.815202    4105 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:40:04.891530    4105 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:40:04.973687    4105 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:40:04.973757    4105 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:40:04.978989    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:05.053383    4105 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:40:06.198264    4105 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.144890125s)
	I1014 07:40:06.198355    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:40:06.207468    4105 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1014 07:40:06.213345    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:40:06.218176    4105 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:40:06.294831    4105 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:40:06.370854    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:06.445873    4105 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:40:06.452227    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:40:06.457184    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:06.534646    4105 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:40:06.572105    4105 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:40:06.572199    4105 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:40:06.574316    4105 start.go:563] Will wait 60s for crictl version
	I1014 07:40:06.574381    4105 ssh_runner.go:195] Run: which crictl
	I1014 07:40:06.575761    4105 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:40:06.591820    4105 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1014 07:40:06.591906    4105 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:40:06.608872    4105 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:40:06.629639    4105 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1014 07:40:06.629725    4105 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1014 07:40:06.631007    4105 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:40:06.634559    4105 kubeadm.go:883] updating cluster {Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1014 07:40:06.634614    4105 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1014 07:40:06.634665    4105 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:40:06.644816    4105 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:40:06.644832    4105 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1014 07:40:06.644901    4105 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:40:06.648417    4105 ssh_runner.go:195] Run: which lz4
	I1014 07:40:06.649586    4105 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:40:06.650894    4105 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:40:06.650904    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1014 07:40:07.637405    4105 docker.go:653] duration metric: took 987.872958ms to copy over tarball
	I1014 07:40:07.637483    4105 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:40:08.801841    4105 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164368166s)
	I1014 07:40:08.801860    4105 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 07:40:08.818038    4105 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:40:08.821180    4105 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1014 07:40:08.826548    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:08.896052    4105 ssh_runner.go:195] Run: sudo systemctl restart docker
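Note: the preload sequence above avoids pulling images over the network: the ~360 MB preloaded-images tarball is scp'd into the guest, unpacked over /var (preserving file capabilities via xattrs), repositories.json is restored, and Docker is restarted to pick up the image store. A sketch of the in-guest half, using the exact paths from the log:

    #!/bin/bash
    # Unpack a preloaded image tarball over Docker's storage root, then restart
    # Docker so it re-reads the restored image database.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    sudo systemctl daemon-reload
    sudo systemctl restart docker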
	I1014 07:40:05.653341    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:05.653376    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:10.394121    4105 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.49808275s)
	I1014 07:40:10.394245    4105 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:40:10.408937    4105 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:40:10.408948    4105 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1014 07:40:10.408954    4105 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 07:40:10.415106    4105 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:10.417246    4105 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:10.419511    4105 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:10.419544    4105 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:10.421458    4105 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:10.421485    4105 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:10.422705    4105 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:10.422729    4105 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:10.424708    4105 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:10.424743    4105 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:10.425919    4105 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1014 07:40:10.426093    4105 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:10.427464    4105 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:10.427472    4105 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:10.428333    4105 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1014 07:40:10.429582    4105 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
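Note: the "daemon lookup ... No such image" messages here are expected probes, not failures: for each required image minikube first asks the local Docker daemon and only falls back to its on-disk cache on a miss. The misses follow from the registry rename: the preload carried k8s.gcr.io tags while this code path expects registry.k8s.io names. A sketch of that probe-then-fallback check:

    #!/bin/bash
    # Probe the daemon for an image before falling back to a cache (sketch).
    IMG="registry.k8s.io/pause:3.7"
    if docker image inspect "$IMG" >/dev/null 2>&1; then
      echo "daemon already has $IMG"
    else
      echo "daemon lookup failed; load $IMG from the on-disk cache instead"
    fi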
	I1014 07:40:10.989878    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:10.996822    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:11.001436    4105 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1014 07:40:11.001469    4105 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:11.001528    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:11.009654    4105 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1014 07:40:11.009677    4105 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:11.009724    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:11.011508    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:11.019242    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1014 07:40:11.025637    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1014 07:40:11.031114    4105 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1014 07:40:11.031140    4105 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:11.031198    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:11.041603    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1014 07:40:11.077320    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:11.088288    4105 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1014 07:40:11.088308    4105 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:11.088370    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:11.098905    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1014 07:40:11.099051    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1014 07:40:11.100599    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1014 07:40:11.100611    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1014 07:40:11.113236    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:11.143710    4105 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1014 07:40:11.143741    4105 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:11.143808    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:11.174487    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1014 07:40:11.220125    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W1014 07:40:11.241317    4105 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1014 07:40:11.241490    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:11.267743    4105 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1014 07:40:11.267767    4105 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1014 07:40:11.267846    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1014 07:40:11.277941    4105 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1014 07:40:11.277975    4105 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:11.278046    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:11.299188    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1014 07:40:11.299341    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1014 07:40:11.329423    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1014 07:40:11.329435    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1014 07:40:11.329458    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1014 07:40:11.329587    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1014 07:40:11.344410    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1014 07:40:11.344436    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1014 07:40:11.368870    4105 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1014 07:40:11.368885    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1014 07:40:11.381470    4105 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1014 07:40:11.381642    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:11.416669    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1014 07:40:11.416689    4105 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1014 07:40:11.416695    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1014 07:40:11.421423    4105 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1014 07:40:11.421444    4105 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:11.421510    4105 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:11.574771    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1014 07:40:11.574808    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 07:40:11.574811    4105 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1014 07:40:11.574843    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1014 07:40:11.574954    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 07:40:11.619355    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1014 07:40:11.619435    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1014 07:40:11.619470    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1014 07:40:11.650366    4105 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 07:40:11.650380    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1014 07:40:11.887118    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 07:40:11.887158    4105 cache_images.go:92] duration metric: took 1.478229708s to LoadCachedImages
	W1014 07:40:11.887198    4105 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
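Note: each cache miss above follows the same cycle: docker image inspect shows the tag is absent or has the wrong hash, the stale tag is removed with docker rmi, the cached tarball is scp'd into the guest, and the tarball is streamed into the daemon. The closing warning means the kube-proxy image had no cached file on the host, so LoadCachedImages reports failure overall even though several images did transfer and load. The load step itself is just:

    #!/bin/bash
    # Stream a cached image tarball into the daemon; sudo cat is used because
    # the tarballs under /var/lib/minikube/images are root-owned.
    sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load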
	I1014 07:40:11.887203    4105 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1014 07:40:11.887264    4105 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-496000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
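Note: the doubled ExecStart= in the kubelet unit above is the standard systemd override pattern: an empty ExecStart= first clears the command inherited from the base unit, then the second line installs the minikube-specific one. A trimmed sketch of installing such a drop-in (the kubelet flags are shortened from the full command line in the log):

    #!/bin/bash
    # Install a kubelet drop-in; the empty ExecStart= clears the packaged
    # default before redefining it.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload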
	I1014 07:40:11.887338    4105 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:40:11.901844    4105 cni.go:84] Creating CNI manager for ""
	I1014 07:40:11.901855    4105 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:40:11.901862    4105 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:40:11.901875    4105 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-496000 NodeName:stopped-upgrade-496000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:40:11.901957    4105 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-496000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
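Note: the rendered kubeadm.yaml above is a single file carrying four YAML documents: InitConfiguration (node and bootstrap settings), ClusterConfiguration (control-plane layout), KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check that all four made it into the file:

    #!/bin/bash
    # List the document kinds in the generated config; expect all four kinds.
    grep '^kind:' /var/tmp/minikube/kubeadm.yaml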
	I1014 07:40:11.902026    4105 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1014 07:40:11.904865    4105 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:40:11.904895    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 07:40:11.907798    4105 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1014 07:40:11.912815    4105 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:40:11.918067    4105 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1014 07:40:11.923614    4105 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1014 07:40:11.924885    4105 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:40:11.928504    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:12.008710    4105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:40:12.014444    4105 certs.go:68] Setting up /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000 for IP: 10.0.2.15
	I1014 07:40:12.014452    4105 certs.go:194] generating shared ca certs ...
	I1014 07:40:12.014461    4105 certs.go:226] acquiring lock for ca certs: {Name:mk8f9f58f46caac35c7cea538c3ba1c75987d64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.014661    4105 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19790-979/.minikube/ca.key
	I1014 07:40:12.022831    4105 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.key
	I1014 07:40:12.022846    4105 certs.go:256] generating profile certs ...
	I1014 07:40:12.025923    4105 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.key
	I1014 07:40:12.025942    4105 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273
	I1014 07:40:12.025957    4105 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1014 07:40:12.154397    4105 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273 ...
	I1014 07:40:12.154411    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273: {Name:mkc366bf23829c486d581f5bceceede0ef407704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.155028    4105 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273 ...
	I1014 07:40:12.155034    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273: {Name:mkcbebf3d6840e9e2ea115c6f567cb363f7a5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.156534    4105 certs.go:381] copying /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273 -> /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt
	I1014 07:40:12.156688    4105 certs.go:385] copying /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273 -> /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key
	I1014 07:40:12.160104    4105 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/proxy-client.key
	I1014 07:40:12.160270    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497.pem (1338 bytes)
	W1014 07:40:12.160463    4105 certs.go:480] ignoring /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497_empty.pem, impossibly tiny 0 bytes
	I1014 07:40:12.160471    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 07:40:12.160518    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem (1078 bytes)
	I1014 07:40:12.160552    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem (1123 bytes)
	I1014 07:40:12.160583    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem (1675 bytes)
	I1014 07:40:12.160662    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem (1708 bytes)
	I1014 07:40:12.161037    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:40:12.168628    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 07:40:12.176132    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:40:12.183216    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:40:12.190049    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 07:40:12.196839    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:40:12.204291    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:40:12.211700    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 07:40:12.218042    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:40:12.224547    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497.pem --> /usr/share/ca-certificates/1497.pem (1338 bytes)
	I1014 07:40:12.231875    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem --> /usr/share/ca-certificates/14972.pem (1708 bytes)
	I1014 07:40:12.238977    4105 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:40:12.243938    4105 ssh_runner.go:195] Run: openssl version
	I1014 07:40:12.245901    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:40:12.249171    4105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:40:12.250583    4105 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:40:12.250616    4105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:40:12.252279    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:40:12.255288    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1497.pem && ln -fs /usr/share/ca-certificates/1497.pem /etc/ssl/certs/1497.pem"
	I1014 07:40:12.258101    4105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1497.pem
	I1014 07:40:12.259439    4105 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:46 /usr/share/ca-certificates/1497.pem
	I1014 07:40:12.259468    4105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1497.pem
	I1014 07:40:12.261278    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1497.pem /etc/ssl/certs/51391683.0"
	I1014 07:40:12.264581    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14972.pem && ln -fs /usr/share/ca-certificates/14972.pem /etc/ssl/certs/14972.pem"
	I1014 07:40:12.268045    4105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14972.pem
	I1014 07:40:12.269460    4105 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:46 /usr/share/ca-certificates/14972.pem
	I1014 07:40:12.269485    4105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14972.pem
	I1014 07:40:12.271237    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14972.pem /etc/ssl/certs/3ec20f2e.0"
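Note: the openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed CA directory lookup: tools that trust /etc/ssl/certs find a CA via a symlink named after its subject hash, with a .0 suffix for the first collision slot. A sketch of the same install:

    #!/bin/bash
    # Make a CA discoverable by OpenSSL's hashed-directory lookup (sketch).
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"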
	I1014 07:40:12.274069    4105 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:40:12.275482    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 07:40:12.277613    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 07:40:12.279660    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 07:40:12.281579    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 07:40:12.283391    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 07:40:12.285154    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
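Note: each -checkend 86400 run above asks openssl whether the certificate expires within the next 86400 seconds; exit 0 means at least 24 hours of validity remain, so a failing check is what would trigger regeneration. Sketch:

    #!/bin/bash
    # Fail loudly if a cert expires within 24 hours, as the checks above do.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "certificate valid for at least another day"
    else
      echo "certificate expires within 24h; regenerate it" >&2
    fi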
	I1014 07:40:12.287153    4105 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:40:12.287224    4105 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:40:12.299953    4105 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:40:12.303272    4105 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 07:40:12.303282    4105 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 07:40:12.303311    4105 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 07:40:12.306082    4105 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:40:12.306566    4105 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-496000" does not appear in /Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:40:12.306670    4105 kubeconfig.go:62] /Users/jenkins/minikube-integration/19790-979/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-496000" cluster setting kubeconfig missing "stopped-upgrade-496000" context setting]
	I1014 07:40:12.306875    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/kubeconfig: {Name:mkbe79fce3a1d9ddd6036a978e097f20767985b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.307326    4105 kapi.go:59] client config for stopped-upgrade-496000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.key", CAFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1064e6e40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:40:12.307772    4105 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 07:40:12.310392    4105 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-496000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
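Note: drift detection here is just diff's exit status: diff -u returns 0 for identical files and 1 when they differ, and the diff above shows the CRI socket scheme and cgroup driver changed between minikube versions, so the cluster is reconfigured from kubeadm.yaml.new. A sketch of the same gate:

    #!/bin/bash
    # Reconfigure only when the rendered config actually changed.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "no drift; keep the existing config"
    else
      echo "drift detected; adopting kubeadm.yaml.new"
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi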
	I1014 07:40:12.310396    4105 kubeadm.go:1160] stopping kube-system containers ...
	I1014 07:40:12.310445    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:40:12.320962    4105 docker.go:483] Stopping containers: [01fe0352d451 88a3564ca66c ef8f73ba51dc 75b8f83bcedd d8ecc7085555 49cd8b0e5006 5c35a795ce9a 3a8b6183f21a]
	I1014 07:40:12.321055    4105 ssh_runner.go:195] Run: docker stop 01fe0352d451 88a3564ca66c ef8f73ba51dc 75b8f83bcedd d8ecc7085555 49cd8b0e5006 5c35a795ce9a 3a8b6183f21a
	I1014 07:40:12.332222    4105 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 07:40:12.338018    4105 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:40:12.341013    4105 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:40:12.341020    4105 kubeadm.go:157] found existing configuration files:
	
	I1014 07:40:12.341053    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf
	I1014 07:40:12.344155    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:40:12.344194    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:40:12.346967    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf
	I1014 07:40:12.349399    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:40:12.349451    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:40:12.352685    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf
	I1014 07:40:12.355643    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:40:12.355676    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:40:12.358297    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf
	I1014 07:40:12.361009    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:40:12.361036    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
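Note: the grep/rm pairs above are a stale-kubeconfig sweep: any of the four kubeconfigs that does not already reference https://control-plane.minikube.internal:61521 is deleted so kubeadm will regenerate it (here every grep fails simply because the files do not exist yet, making each rm -f a no-op). An equivalent loop:

    #!/bin/bash
    # Drop any kubeconfig that doesn't reference the expected endpoint.
    ENDPOINT="https://control-plane.minikube.internal:61521"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done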
	I1014 07:40:12.364134    4105 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:40:12.366862    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.390378    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.786516    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.915557    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.947099    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
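Note: rather than running a full kubeadm init, the restart path replays individual init phases against the rendered config: certs, kubeconfig, kubelet-start, control-plane, and etcd. The pinned v1.24.1 binaries are put first on PATH so they win over anything else on the guest. Condensed into a loop:

    #!/bin/bash
    # Replay the kubeadm init phases used above against the rendered config.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml  # $phase unquoted on purpose
    done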
	I1014 07:40:12.969063    4105 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:40:12.969155    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:40:13.471516    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:40:13.971248    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:40:13.975566    4105 api_server.go:72] duration metric: took 1.006525625s to wait for apiserver process to appear ...
	I1014 07:40:13.975577    4105 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:40:13.975592    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
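Note: from this point the two concurrent minikube processes (PIDs 4105 and 4051, their log lines interleaved below) poll the same healthz endpoint every five seconds and never get an answer, which is the core symptom of this failed run. A hypothetical standalone poll, assuming /healthz accepts anonymous requests:

    #!/bin/bash
    # Poll the apiserver health endpoint until it responds (sketch);
    # -k skips TLS verification of the self-signed serving certificate.
    until curl -fsk https://10.0.2.15:8443/healthz >/dev/null; do
      echo "apiserver not healthy yet; retrying in 5s"
      sleep 5
    done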
	I1014 07:40:10.653571    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:10.653590    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:18.977573    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:18.977625    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:15.653953    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:15.654072    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:23.978080    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:23.978127    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:20.656551    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:20.656590    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:28.978661    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:28.978752    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:25.658826    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:25.658905    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:33.979890    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:33.979930    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:30.661425    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:30.661465    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:38.980892    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:38.980984    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:35.663625    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:35.663970    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:35.692404    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:35.692549    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:35.713619    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:35.713724    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:35.727709    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:35.727792    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:35.743175    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:35.743255    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:35.754551    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:35.754631    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:35.765542    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:35.765636    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:35.777630    4051 logs.go:282] 0 containers: []
	W1014 07:40:35.777641    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:35.777711    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:35.788984    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:35.789014    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:35.789020    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:35.800560    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:35.800572    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:35.827181    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:35.827188    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:35.841461    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:35.841472    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:35.881725    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:35.881733    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:40:35.894935    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:35.894946    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:35.906757    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:35.906767    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:35.918247    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:35.918262    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:35.932518    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:35.932531    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:35.946137    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:35.946148    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:35.957968    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:35.957977    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:35.976700    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:35.976719    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:35.988511    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:35.988522    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:40:36.000233    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:36.000247    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:36.004799    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:36.004808    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:36.112629    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:36.112642    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:36.126868    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:36.126885    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
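Note: once healthz keeps timing out, minikube switches to evidence gathering: the last 400 lines from each kube-system container (two IDs per component where a container was restarted), the kubelet and docker journals, dmesg, and kubectl describe nodes. A trimmed sketch of that sweep:

    #!/bin/bash
    # Collect the tail of each apiserver container's logs plus the kubelet journal.
    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}' |
      while read -r id; do docker logs --tail 400 "$id"; done
    sudo journalctl -u kubelet -n 400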
	I1014 07:40:38.640705    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:43.982534    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:43.982553    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:43.643422    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:43.643659    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:43.661444    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:43.661541    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:43.677547    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:43.677631    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:43.687976    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:43.688060    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:43.702571    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:43.702661    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:43.713555    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:43.713639    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:43.724549    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:43.724625    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:43.734997    4051 logs.go:282] 0 containers: []
	W1014 07:40:43.735007    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:43.735085    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:43.746071    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:43.746090    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:43.746095    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:43.757843    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:43.757854    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:40:43.770260    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:43.770275    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:43.799923    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:43.799933    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:43.811398    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:43.811409    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:43.838084    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:43.838096    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:43.850778    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:43.850788    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:40:43.862202    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:43.862214    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:43.876697    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:43.876708    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:43.881129    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:43.881137    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:43.918187    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:43.918197    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:43.932215    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:43.932225    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:43.944369    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:43.944381    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:43.955895    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:43.955907    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:43.998714    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:43.998722    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:44.010991    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:44.011001    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:44.032775    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:44.032786    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
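
Each "Gathering logs for ..." step above is a single shell command run through /bin/bash -c inside the guest: `docker logs` capped at the last 400 lines per container, journalctl for the kubelet and Docker units, a filtered dmesg, `kubectl describe nodes` against the in-VM kubeconfig, and a crictl-or-docker fallback for container status. A self-contained sketch that runs the same commands locally (illustration only, not minikube's ssh_runner; the container ID is one taken from the log above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command through bash, as ssh_runner does in the log.
    func run(cmd string) string {
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out)
    }

    func main() {
    	// Per-container logs, capped at the last 400 lines.
    	fmt.Print(run("docker logs --tail 400 76bee3516fb4"))
    	// Host-side sources are gathered the same way:
    	fmt.Print(run("sudo journalctl -u kubelet -n 400"))
    	fmt.Print(run("sudo journalctl -u docker -u cri-docker -n 400"))
    	fmt.Print(run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"))
    	fmt.Print(run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"))
    	// Container status prefers crictl and falls back to docker:
    	fmt.Print(run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"))
    }
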
	I1014 07:40:48.983952    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:48.984002    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:46.546240    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:53.986191    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:53.986240    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:51.548629    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
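
Interleaved with the log dumps, two minikube processes (PIDs 4051 and 4105, apparently parallel test runs sharing this log, which is why timestamps appear out of order) each poll an apiserver /healthz endpoint; both report 10.0.2.15, the default QEMU user-mode guest address, so each is polling its own VM. The roughly five-second gap between every "Checking" and "stopped" pair suggests a short client timeout. A hypothetical Go sketch of that retry loop (the 5 s timeout and the skipped certificate verification are assumptions, not confirmed minikube settings):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed from the observed check/stop gaps
    		Transport: &http.Transport{
    			// The in-VM apiserver presents a cert the host does not trust.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://10.0.2.15:8443/healthz"
    	for attempt := 0; attempt < 10; attempt++ {
    		fmt.Println("Checking apiserver healthz at", url, "...")
    		resp, err := client.Get(url)
    		if err != nil {
    			// Matches the log's "stopped: ... Client.Timeout exceeded" lines;
    			// the real code falls back to a log-gathering pass here.
    			fmt.Println("stopped:", err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver is healthy")
    			return
    		}
    	}
    }
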
	I1014 07:40:51.548873    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:51.567849    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:51.567944    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:51.581566    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:51.581652    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:51.593178    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:51.593262    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:51.604316    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:51.604411    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:51.615092    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:51.615182    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:51.626239    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:51.626314    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:51.637013    4051 logs.go:282] 0 containers: []
	W1014 07:40:51.637025    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:51.637107    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:51.647619    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:51.647636    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:51.647641    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:51.661687    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:51.661701    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:51.675245    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:51.675258    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:51.686423    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:51.686434    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:40:51.698382    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:51.698393    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:51.709454    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:51.709470    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:51.720810    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:51.720823    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:51.761198    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:51.761206    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:40:51.772463    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:51.772477    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:51.784249    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:51.784259    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:51.811745    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:51.811752    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:51.816678    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:51.816686    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:51.854692    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:51.854702    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:51.878831    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:51.878844    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:51.896196    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:51.896207    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:51.910262    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:51.910275    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:51.922343    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:51.922354    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:40:54.435777    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:58.988429    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:58.988471    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:59.436441    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:59.436619    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:40:59.448475    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:40:59.448574    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:40:59.459180    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:40:59.459263    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:40:59.469685    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:40:59.469764    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:40:59.480206    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:40:59.480274    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:40:59.490960    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:40:59.491038    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:40:59.501217    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:40:59.501288    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:40:59.512256    4051 logs.go:282] 0 containers: []
	W1014 07:40:59.512269    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:40:59.512342    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:40:59.525532    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:40:59.525553    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:40:59.525561    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:40:59.529932    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:40:59.529941    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:40:59.565496    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:40:59.565509    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:40:59.581089    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:40:59.581106    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:40:59.592675    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:40:59.592687    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:40:59.608665    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:40:59.608676    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:40:59.621318    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:40:59.621330    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:40:59.632870    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:40:59.632885    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:40:59.644228    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:40:59.644245    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:40:59.655292    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:40:59.655304    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:40:59.666477    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:40:59.666490    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:40:59.683718    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:40:59.683732    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:40:59.710694    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:40:59.710701    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:40:59.723448    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:40:59.723459    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:40:59.767840    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:40:59.767860    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:40:59.781683    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:40:59.781692    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:40:59.796717    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:40:59.796732    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:03.989349    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:03.989394    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:02.310218    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:08.989627    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:08.989649    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:07.312613    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:07.312900    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:07.335657    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:07.335764    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:07.350881    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:07.350969    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:07.363249    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:07.363340    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:07.374404    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:07.374487    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:07.384930    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:07.385006    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:07.397232    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:07.397311    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:07.407575    4051 logs.go:282] 0 containers: []
	W1014 07:41:07.407586    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:07.407650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:07.423949    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:07.423967    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:07.423972    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:07.438122    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:07.438135    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:07.451565    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:07.451576    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:07.467663    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:07.467674    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:07.496044    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:07.496058    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:07.508134    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:07.508148    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:07.549987    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:07.549997    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:07.585401    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:07.585414    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:07.597215    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:07.597228    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:07.608473    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:07.608490    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:07.625602    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:07.625614    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:07.630071    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:07.630079    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:07.641386    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:07.641400    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:07.652326    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:07.652339    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:07.666187    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:07.666200    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:07.677661    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:07.677674    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:07.689216    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:07.689232    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:10.202942    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:13.990934    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:13.991417    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:15.205266    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:15.205588    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:15.234028    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:15.234176    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:15.256245    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:15.256333    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:15.269346    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:15.269432    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:15.279939    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:15.280020    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:15.290568    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:15.290641    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:15.300952    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:15.301029    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:15.311070    4051 logs.go:282] 0 containers: []
	W1014 07:41:15.311082    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:15.311148    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:15.321499    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:15.321518    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:15.321523    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:15.335279    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:15.335289    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:15.346514    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:15.346529    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:15.358191    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:15.358199    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:15.375929    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:15.375940    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:15.388009    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:15.388020    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:15.414068    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:15.414081    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:15.454397    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:15.454405    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:15.459025    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:15.459034    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:15.494113    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:15.494126    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:15.508359    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:15.508374    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:15.519883    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:15.519896    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:15.531494    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:15.531505    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:14.026306    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:14.026511    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:14.046183    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:14.046309    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:14.060763    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:14.060851    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:14.073369    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:14.073450    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:14.084306    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:14.084441    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:14.095527    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:14.095619    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:14.105613    4105 logs.go:282] 0 containers: []
	W1014 07:41:14.105628    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:14.105695    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:14.116075    4105 logs.go:282] 0 containers: []
	W1014 07:41:14.116088    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:14.116095    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:14.116101    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:14.131194    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:14.131204    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:14.144051    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:14.144062    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:14.183469    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:14.183480    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:14.210159    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:14.210170    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:14.221600    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:14.221613    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:14.233924    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:14.233935    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:14.259047    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:14.259057    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:14.369959    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:14.369971    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:14.384058    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:14.384069    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:14.400607    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:14.400618    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:14.412119    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:14.412130    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:14.437168    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:14.437179    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:14.450994    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:14.451005    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:14.471090    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:14.471103    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:16.976296    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:15.545235    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:15.545247    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:15.556997    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:15.557010    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:15.568195    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:15.568208    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:15.580650    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:15.580663    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:18.093585    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:21.977022    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:21.977634    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:22.016286    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:22.016473    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:22.039594    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:22.039725    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:22.054993    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:22.055083    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:22.066853    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:22.066940    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:22.077810    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:22.077894    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:22.092061    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:22.092142    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:22.111131    4105 logs.go:282] 0 containers: []
	W1014 07:41:22.111144    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:22.111221    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:22.122378    4105 logs.go:282] 0 containers: []
	W1014 07:41:22.122391    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:22.122401    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:22.122406    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:22.138014    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:22.138025    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:22.150591    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:22.150602    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:22.174726    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:22.174734    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:22.211642    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:22.211652    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:22.225915    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:22.225926    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:22.237262    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:22.237274    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:22.252874    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:22.252886    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:22.270054    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:22.270063    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:22.284075    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:22.284085    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:22.288473    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:22.288482    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:22.320972    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:22.320983    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:22.335335    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:22.335346    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:22.346975    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:22.346989    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:22.385479    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:22.385494    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:23.095696    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:23.095876    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:23.119641    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:23.119734    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:23.131990    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:23.132072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:23.155780    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:23.155866    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:23.166355    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:23.166442    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:23.176583    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:23.176652    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:23.187856    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:23.187932    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:23.197947    4051 logs.go:282] 0 containers: []
	W1014 07:41:23.197958    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:23.198034    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:23.208184    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:23.208202    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:23.208209    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:23.232340    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:23.232348    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:23.236482    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:23.236491    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:23.250237    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:23.250250    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:23.261412    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:23.261424    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:23.279737    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:23.279747    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:23.295325    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:23.295334    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:23.306452    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:23.306463    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:23.348178    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:23.348194    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:23.360070    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:23.360085    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:23.371734    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:23.371744    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:23.383183    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:23.383200    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:23.395299    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:23.395314    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:23.434282    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:23.434293    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:23.452809    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:23.452819    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:23.463775    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:23.463785    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:23.474922    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:23.474941    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:24.902094    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:25.988316    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:29.904668    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:29.904852    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:29.920346    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:29.920437    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:29.932024    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:29.932109    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:29.943140    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:29.943217    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:29.954145    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:29.954229    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:29.964353    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:29.964433    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:29.975225    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:29.975301    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:29.994312    4105 logs.go:282] 0 containers: []
	W1014 07:41:29.994325    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:29.994394    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:30.004882    4105 logs.go:282] 0 containers: []
	W1014 07:41:30.004893    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:30.004901    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:30.004907    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:30.044241    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:30.044249    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:30.048861    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:30.048867    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:30.062958    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:30.062968    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:30.075810    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:30.075820    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:30.092218    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:30.092227    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:30.118187    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:30.118196    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:30.155318    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:30.155335    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:30.173964    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:30.173978    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:30.199134    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:30.199146    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:30.212551    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:30.212562    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:30.224273    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:30.224288    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:30.241601    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:30.241611    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:30.254083    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:30.254098    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:30.267690    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:30.267700    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:32.781957    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:30.989709    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:30.989856    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:31.003335    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:31.003443    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:31.014741    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:31.014831    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:31.025440    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:31.025518    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:31.035788    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:31.035868    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:31.046162    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:31.046238    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:31.056555    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:31.056629    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:31.067349    4051 logs.go:282] 0 containers: []
	W1014 07:41:31.067362    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:31.067428    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:31.077525    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:31.077544    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:31.077552    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:31.114390    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:31.114401    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:31.128576    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:31.128588    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:31.146373    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:31.146382    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:31.150915    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:31.150922    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:31.168369    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:31.168381    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:31.212047    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:31.212058    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:31.226137    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:31.226148    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:31.242321    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:31.242332    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:31.253357    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:31.253368    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:31.279044    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:31.279054    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:31.291857    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:31.291866    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:31.303375    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:31.303385    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:31.314787    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:31.314797    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:31.326184    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:31.326195    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:31.337281    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:31.337294    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:31.349466    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:31.349480    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:33.862768    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:37.782766    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:37.782915    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:37.796845    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:37.796939    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:37.810935    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:37.811018    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:37.825992    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:37.826070    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:37.836212    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:37.836298    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:37.848660    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:37.848737    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:37.859268    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:37.859360    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:37.869395    4105 logs.go:282] 0 containers: []
	W1014 07:41:37.869406    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:37.869474    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:37.879527    4105 logs.go:282] 0 containers: []
	W1014 07:41:37.879539    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:37.879548    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:37.879554    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:37.918607    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:37.918616    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:37.922778    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:37.922783    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:37.948023    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:37.948035    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:37.984254    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:37.984265    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:37.998067    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:37.998081    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:38.009305    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:38.009316    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:38.023760    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:38.023772    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:38.042548    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:38.042562    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:38.053796    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:38.053807    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:38.078761    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:38.078777    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:38.092397    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:38.092410    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:38.116320    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:38.116330    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:38.127861    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:38.127872    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:38.145104    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:38.145114    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:38.865131    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:38.865349    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:38.881858    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:38.881947    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:38.893681    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:38.893765    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:38.904567    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:38.904641    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:38.914939    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:38.915009    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:38.925705    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:38.925770    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:38.935906    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:38.935972    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:38.946369    4051 logs.go:282] 0 containers: []
	W1014 07:41:38.946385    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:38.946480    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:38.957322    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
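
Before each sweep, the runner enumerates candidate containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which is where the "2 containers: [...]" and the "No container was found matching \"kindnet\"" lines above come from. A rough local equivalent in Go (the function name is an assumption; minikube executes the same docker command through ssh_runner on the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listComponentContainers reproduces the enumeration step above: for each
    // control-plane component, list matching container IDs with
    // `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func listComponentContainers(components []string) map[string][]string {
    	found := make(map[string][]string)
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c,
    			"--format", "{{.ID}}").Output()
    		if err != nil {
    			continue // docker unavailable; nothing to record for this component
    		}
    		ids := strings.Fields(string(out))
    		found[c] = ids
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    	return found
    }

    func main() {
    	listComponentContainers([]string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	})
    }

An empty result, as with kindnet and (for process 4105) storage-provisioner, is logged as a warning and that component is simply skipped in the following sweep.
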
	I1014 07:41:38.957339    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:38.957344    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:38.968989    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:38.968998    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:39.011734    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:39.011747    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:39.025486    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:39.025497    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:39.037259    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:39.037270    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:39.048738    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:39.048751    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:39.061016    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:39.061027    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:39.072794    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:39.072806    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:39.077445    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:39.077453    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:39.091458    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:39.091471    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:39.103448    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:39.103460    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:39.121366    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:39.121376    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:39.132329    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:39.132342    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:39.156473    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:39.156482    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:39.168338    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:39.168351    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:39.205302    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:39.205313    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:39.219900    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:39.219912    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:40.661238    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:41.733998    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:45.663773    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
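
Each "Checking apiserver healthz" / "stopped:" pair is one iteration of the readiness poll: an HTTPS GET against /healthz that fails client-side when the apiserver hangs, producing the "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error seen above. A minimal sketch of such a probe, assuming a self-signed serving certificate and a timeout inferred from the roughly five-second gap between the paired lines (both assumptions, not minikube's exact client configuration):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz issues one GET against the apiserver /healthz endpoint.
    // A hung apiserver surfaces exactly as in the log: the client-side
    // timeout fires and the error mentions Client.Timeout.
    func probeHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // inferred from the ~5 s check/stopped gap
    		Transport: &http.Transport{
    			// The guest apiserver presents a self-signed cert (assumption).
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    	return nil
    }

    func main() {
    	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println("stopped:", err)
    	}
    }

On every failed probe the runner falls back to the enumerate-then-sweep cycle shown earlier, which is why the check/stopped pairs and the log sweeps alternate for both processes (4051 and 4105) through the rest of this section.
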
	I1014 07:41:45.664018    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:45.685544    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:45.685650    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:45.698484    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:45.698570    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:45.709536    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:45.709608    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:45.720515    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:45.720584    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:45.731057    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:45.731136    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:45.746521    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:45.746600    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:45.757243    4105 logs.go:282] 0 containers: []
	W1014 07:41:45.757255    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:45.757324    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:45.767853    4105 logs.go:282] 0 containers: []
	W1014 07:41:45.767872    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:45.767880    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:45.767885    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:45.781381    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:45.781392    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:45.797543    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:45.797554    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:45.812462    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:45.812473    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:45.824382    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:45.824393    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:45.838711    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:45.838722    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:45.862991    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:45.863000    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:45.866985    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:45.866999    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:45.898906    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:45.898917    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:45.938528    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:45.938537    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:45.952871    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:45.952883    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:45.967449    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:45.967459    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:45.984426    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:45.984436    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:45.996748    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:45.996759    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:46.038450    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:46.038462    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:48.558920    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:46.736381    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:46.736711    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:46.758951    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:46.759072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:46.774316    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:46.774411    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:46.786954    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:46.787044    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:46.798877    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:46.798966    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:46.809683    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:46.809751    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:46.820821    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:46.820898    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:46.831009    4051 logs.go:282] 0 containers: []
	W1014 07:41:46.831019    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:46.831081    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:46.841253    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:46.841273    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:46.841278    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:46.853550    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:46.853562    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:46.865071    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:46.865081    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:46.878151    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:46.878165    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:46.893551    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:46.893562    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:46.905172    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:46.905182    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:46.916261    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:46.916273    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:46.956113    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:46.956123    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:46.960436    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:46.960442    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:47.002763    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:47.002773    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:47.014743    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:47.014753    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:47.031823    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:47.031833    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:47.057366    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:47.057373    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:47.068707    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:47.068721    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:47.084291    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:47.084305    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:47.096127    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:47.096139    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:47.107428    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:47.107439    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:49.620854    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:53.561236    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:53.561416    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:53.574805    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:53.574895    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:53.586281    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:53.586354    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:53.596577    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:53.596660    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:53.607420    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:53.607490    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:53.618134    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:53.618216    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:53.628293    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:53.628361    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:53.639200    4105 logs.go:282] 0 containers: []
	W1014 07:41:53.639215    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:53.639284    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:53.650022    4105 logs.go:282] 0 containers: []
	W1014 07:41:53.650031    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:53.650039    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:53.650044    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:53.661635    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:53.661649    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:53.677810    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:53.677821    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:53.689414    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:53.689424    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:53.715496    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:53.715511    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:53.729658    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:53.729668    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:53.741581    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:53.741597    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:53.778134    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:53.778144    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:53.814672    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:53.814683    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:53.826449    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:53.826463    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:53.843730    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:53.843741    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:53.848283    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:53.848290    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:53.862395    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:53.862404    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:53.886346    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:53.886357    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:53.910630    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:53.910640    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:54.623039    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:54.623284    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:54.644543    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:41:54.644648    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:54.659174    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:41:54.659260    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:54.671576    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:41:54.671664    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:54.682110    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:41:54.682202    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:54.693585    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:41:54.693670    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:54.704126    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:41:54.704212    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:54.714117    4051 logs.go:282] 0 containers: []
	W1014 07:41:54.714129    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:54.714199    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:54.724698    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:41:54.724714    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:54.724719    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:54.764861    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:41:54.764871    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:41:54.775481    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:41:54.775495    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:41:54.787725    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:41:54.787739    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:41:54.798653    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:41:54.798667    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:41:54.810263    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:41:54.810276    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:41:54.825200    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:41:54.825218    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:54.837713    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:54.837725    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:54.862591    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:54.862598    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:54.866682    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:41:54.866688    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:41:54.880719    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:41:54.880732    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:41:54.895723    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:41:54.895733    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:41:54.907314    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:41:54.907328    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:41:54.918702    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:41:54.918712    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:41:54.935930    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:54.935940    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:54.974114    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:41:54.974127    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:41:54.987807    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:41:54.987819    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:41:56.426242    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:57.500697    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:01.428402    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:01.428659    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:01.447301    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:01.447404    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:01.461168    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:01.461258    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:01.472381    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:01.472459    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:01.482791    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:01.482882    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:01.494008    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:01.494086    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:01.505023    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:01.505110    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:01.515383    4105 logs.go:282] 0 containers: []
	W1014 07:42:01.515396    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:01.515468    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:01.525860    4105 logs.go:282] 0 containers: []
	W1014 07:42:01.525870    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:01.525879    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:01.525884    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:01.530470    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:01.530477    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:01.542704    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:01.542714    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:01.567171    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:01.567179    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:01.578847    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:01.578882    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:01.599556    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:01.599569    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:01.617833    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:01.617847    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:01.654782    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:01.654792    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:01.692652    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:01.692665    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:01.723558    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:01.723571    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:01.753488    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:01.753500    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:01.767494    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:01.767507    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:01.782674    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:01.782687    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:01.799852    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:01.799862    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:01.812032    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:01.812044    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:02.501577    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:02.501897    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:02.526977    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:02.527134    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:02.546066    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:02.546161    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:02.568267    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:02.568347    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:02.579360    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:02.579453    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:02.589937    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:02.590014    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:02.611883    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:02.611961    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:02.622197    4051 logs.go:282] 0 containers: []
	W1014 07:42:02.622209    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:02.622278    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:02.633197    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:02.633215    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:02.633221    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:02.638032    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:02.638041    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:02.651942    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:02.651951    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:02.663365    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:02.663376    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:02.706231    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:02.706240    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:02.721236    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:02.721246    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:02.733513    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:02.733526    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:02.751586    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:02.751596    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:02.762941    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:02.762953    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:02.774380    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:02.774394    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:02.786016    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:02.786027    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:02.797850    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:02.797861    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:02.832910    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:02.832921    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:02.845000    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:02.845012    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:02.857238    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:02.857249    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:02.868531    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:02.868543    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:02.880154    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:02.880163    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:05.408284    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:04.329306    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:10.410860    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:10.411100    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:10.444029    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:10.444129    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:10.458538    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:10.458628    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:10.469965    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:10.470050    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:10.480448    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:10.480529    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:10.498625    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:10.498702    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:10.509725    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:10.509807    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:10.519880    4051 logs.go:282] 0 containers: []
	W1014 07:42:10.519890    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:10.519961    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:10.530144    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:10.530159    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:10.530165    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:09.332006    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:09.332439    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:09.365760    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:09.365907    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:09.384502    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:09.384617    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:09.398142    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:09.398235    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:09.410832    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:09.410911    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:09.421509    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:09.421598    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:09.433158    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:09.433245    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:09.443539    4105 logs.go:282] 0 containers: []
	W1014 07:42:09.443551    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:09.443612    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:09.454481    4105 logs.go:282] 0 containers: []
	W1014 07:42:09.454501    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:09.454509    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:09.454515    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:09.479839    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:09.479857    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:09.518914    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:09.518926    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:09.523535    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:09.523543    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:09.558041    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:09.558052    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:09.576173    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:09.576183    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:09.588771    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:09.588783    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:09.613233    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:09.613243    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:09.624701    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:09.624713    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:09.639481    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:09.639494    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:09.653969    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:09.653980    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:09.672970    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:09.672983    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:09.688250    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:09.688260    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:09.700375    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:09.700387    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:09.718708    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:09.718721    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:12.232728    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:10.541079    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:10.541092    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:10.552225    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:10.552240    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:10.576229    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:10.576236    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:10.587796    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:10.587806    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:10.624621    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:10.624631    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:10.636588    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:10.636598    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:10.648097    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:10.648109    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:10.659343    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:10.659356    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:10.681115    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:10.681128    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:10.693633    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:10.693645    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:10.698389    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:10.698398    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:10.709569    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:10.709581    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:10.723439    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:10.723449    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:10.764108    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:10.764121    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:10.778868    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:10.778884    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:10.790418    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:10.790435    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:13.304618    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:17.235100    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:17.235527    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:17.266949    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:17.267104    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:17.286707    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:17.286824    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:17.301143    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:17.301220    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:17.312923    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:17.313002    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:17.328815    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:17.328885    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:17.339687    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:17.339761    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:17.350449    4105 logs.go:282] 0 containers: []
	W1014 07:42:17.350460    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:17.350520    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:17.361512    4105 logs.go:282] 0 containers: []
	W1014 07:42:17.361523    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:17.361530    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:17.361537    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:17.376588    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:17.376602    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:17.392458    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:17.392470    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:17.404260    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:17.404270    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:17.428210    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:17.428219    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:17.439332    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:17.439342    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:17.477418    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:17.477432    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:17.492403    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:17.492415    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:17.506964    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:17.506975    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:17.521316    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:17.521331    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:17.525476    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:17.525485    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:17.561657    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:17.561668    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:17.587268    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:17.587283    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:17.604969    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:17.604980    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:17.619530    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:17.619545    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:18.305990    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:18.306135    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:18.319705    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:18.319795    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:18.331024    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:18.331105    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:18.341550    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:18.341618    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:18.352197    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:18.352273    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:18.363283    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:18.363352    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:18.374440    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:18.374505    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:18.384684    4051 logs.go:282] 0 containers: []
	W1014 07:42:18.384697    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:18.384765    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:18.395655    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:18.395677    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:18.395683    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:18.399958    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:18.399965    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:18.411388    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:18.411401    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:18.434937    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:18.434943    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:18.475768    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:18.475775    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:18.516931    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:18.516944    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:18.528488    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:18.528504    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:18.539796    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:18.539808    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:18.552520    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:18.552534    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:18.563984    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:18.563995    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:18.575755    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:18.575767    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:18.594333    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:18.594344    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:18.608760    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:18.608770    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:18.620097    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:18.620107    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:18.631024    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:18.631037    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:18.642980    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:18.642990    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:18.663222    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:18.663231    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
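
	The cycle above repeats throughout this run: minikube probes the apiserver's /healthz endpoint, and each time the probe times out ("context deadline exceeded (Client.Timeout exceeded while awaiting headers)") it falls back to enumerating the control-plane containers and tailing their logs. Below is a minimal Go sketch of just the probe, assuming a ~5 s client timeout (the gap between each "Checking" and "stopped" pair in the log) and skipped TLS verification for the VM's self-signed certificate; this is an illustration, not minikube's actual api_server.go code.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // checkHealthz probes the apiserver the way the log's
	    // "Checking apiserver healthz at ..." lines imply.
	    func checkHealthz(url string) error {
	        client := &http.Client{
	            // Assumed timeout; matches the ~5 s gap between the
	            // "Checking" and "stopped" lines above.
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // The in-VM apiserver serves a self-signed cert.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get(url)
	        if err != nil {
	            // Mirrors the log's "stopped: <url>: <error>" shape.
	            return fmt.Errorf("stopped: %s: %w", url, err)
	        }
	        defer resp.Body.Close()
	        if resp.StatusCode != http.StatusOK {
	            return fmt.Errorf("healthz returned %d", resp.StatusCode)
	        }
	        return nil
	    }

	    func main() {
	        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
	            // On failure, minikube starts a log-gathering pass.
	            fmt.Println(err)
	        }
	    }
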
	I1014 07:42:20.132658    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:21.179916    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:25.134854    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:25.134986    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:25.147338    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:25.147428    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:25.157697    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:25.157776    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:25.167953    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:25.168033    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:25.184848    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:25.184931    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:25.194907    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:25.194982    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:25.205744    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:25.205815    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:25.215998    4105 logs.go:282] 0 containers: []
	W1014 07:42:25.216018    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:25.216080    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:25.226462    4105 logs.go:282] 0 containers: []
	W1014 07:42:25.226474    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:25.226481    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:25.226486    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:25.251428    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:25.251437    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:25.262490    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:25.262501    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:25.278937    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:25.278947    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:25.290577    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:25.290586    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:25.325315    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:25.325329    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:25.339698    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:25.339710    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:25.361851    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:25.361864    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:25.377144    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:25.377156    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:25.391147    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:25.391158    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:25.414278    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:25.414288    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:25.419016    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:25.419023    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:25.433611    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:25.433621    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:25.458516    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:25.458525    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:25.470741    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:25.470755    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
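
	Each gather pass has the same shape for every component: list that component's containers by the kubelet's k8s_<name> prefix via docker ps, then tail the last 400 lines of each match. The command strings in the sketch below are copied from the log; the Go plumbing around them is illustrative and stands in for minikube's ssh_runner, which executes these inside the guest over SSH.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // gather lists a component's containers and tails their logs,
	    // mirroring the paired ssh_runner lines in the log above.
	    func gather(component string) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            fmt.Println("listing failed:", err)
	            return
	        }
	        ids := strings.Fields(string(out))
	        // Corresponds to the "N containers: [...]" log lines.
	        fmt.Printf("%d containers: %v\n", len(ids), ids)
	        for _, id := range ids {
	            logs, _ := exec.Command("/bin/bash", "-c",
	                "docker logs --tail 400 "+id).CombinedOutput()
	            fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
	        }
	    }

	    func main() {
	        // The component list and order match the passes in this log.
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
	            "kube-scheduler", "kube-proxy", "kube-controller-manager",
	            "kindnet", "storage-provisioner"} {
	            gather(c)
	        }
	    }
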
	I1014 07:42:28.012129    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:26.182307    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:26.182642    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:26.213504    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:26.213643    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:26.230976    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:26.231078    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:26.245023    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:26.245108    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:26.256702    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:26.256788    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:26.270831    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:26.270913    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:26.284388    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:26.284467    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:26.294705    4051 logs.go:282] 0 containers: []
	W1014 07:42:26.294716    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:26.294795    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:26.304847    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:26.304871    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:26.304877    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:26.316363    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:26.316373    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:26.330028    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:26.330043    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:26.373625    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:26.373635    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:26.388550    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:26.388560    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:26.400134    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:26.400146    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:26.411373    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:26.411386    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:26.433318    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:26.433329    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:26.444673    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:26.444684    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:26.470912    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:26.470923    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:26.506046    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:26.506058    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:26.520070    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:26.520083    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:26.535038    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:26.535048    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:26.547060    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:26.547070    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:26.557828    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:26.557850    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:26.562308    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:26.562314    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:26.580794    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:26.580805    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:29.095505    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:33.014276    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:33.014537    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:33.035210    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:33.035321    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:33.051020    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:33.051109    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:33.064061    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:33.064138    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:33.081390    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:33.081481    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:33.091979    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:33.092056    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:33.102791    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:33.102869    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:33.113309    4105 logs.go:282] 0 containers: []
	W1014 07:42:33.113321    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:33.113390    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:33.124143    4105 logs.go:282] 0 containers: []
	W1014 07:42:33.124158    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:33.124166    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:33.124171    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:33.164481    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:33.164494    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:33.168798    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:33.168806    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:33.182465    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:33.182476    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:33.198074    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:33.198086    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:33.216641    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:33.216651    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:33.241263    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:33.241273    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:33.252431    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:33.252442    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:33.266183    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:33.266194    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:33.281288    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:33.281298    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:33.293384    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:33.293395    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:33.316276    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:33.316283    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:33.327850    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:33.327862    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:33.363112    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:33.363124    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:33.384235    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:33.384247    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:34.097629    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:34.097760    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:34.108848    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:34.108937    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:34.119587    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:34.119663    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:34.129677    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:34.129761    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:34.140686    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:34.140762    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:34.151949    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:34.152027    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:34.162923    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:34.162996    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:34.173220    4051 logs.go:282] 0 containers: []
	W1014 07:42:34.173237    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:34.173307    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:34.199541    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:34.199558    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:34.199563    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:34.218805    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:34.218821    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:34.231750    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:34.231763    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:34.242826    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:34.242837    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:34.246992    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:34.246999    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:34.282571    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:34.282583    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:34.296249    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:34.296260    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:34.307708    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:34.307717    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:34.318595    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:34.318608    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:34.343633    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:34.343640    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:34.355534    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:34.355547    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:34.398024    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:34.398030    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:34.412745    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:34.412763    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:34.423768    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:34.423780    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:34.435241    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:34.435253    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:34.458852    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:34.458864    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:34.473376    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:34.473385    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:35.899668    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:36.988191    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:40.901858    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:40.902161    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:40.930157    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:40.930307    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:40.948496    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:40.948585    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:40.962292    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:40.962373    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:40.976807    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:40.976894    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:40.987875    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:40.987947    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:40.998958    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:40.999023    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:41.009371    4105 logs.go:282] 0 containers: []
	W1014 07:42:41.009382    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:41.009449    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:41.019276    4105 logs.go:282] 0 containers: []
	W1014 07:42:41.019289    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:41.019296    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:41.019302    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:41.041517    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:41.041529    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:41.068932    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:41.068943    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:41.080617    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:41.080629    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:41.115737    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:41.115750    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:41.130025    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:41.130034    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:41.141743    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:41.141756    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:41.155353    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:41.155363    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:41.178580    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:41.178590    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:41.182682    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:41.182689    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:41.200391    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:41.200402    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:41.212822    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:41.212832    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:41.224067    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:41.224080    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:41.238452    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:41.238461    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:41.254076    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:41.254106    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:43.793763    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:41.990395    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:41.990794    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:42.020537    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:42.020690    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:42.038620    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:42.038735    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:42.052994    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:42.053089    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:42.065126    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:42.065209    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:42.076006    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:42.076086    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:42.088555    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:42.088627    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:42.099802    4051 logs.go:282] 0 containers: []
	W1014 07:42:42.099815    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:42.099888    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:42.110580    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:42.110600    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:42.110606    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:42.121666    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:42.121678    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:42.135183    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:42.135195    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:42.152746    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:42.152757    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:42.195792    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:42.195805    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:42.233864    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:42.233876    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:42.248409    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:42.248420    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:42.262077    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:42.262090    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:42.273958    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:42.273970    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:42.278527    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:42.278534    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:42.289601    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:42.289613    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:42.301377    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:42.301392    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:42.317442    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:42.317452    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:42.329031    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:42.329043    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:42.340058    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:42.340074    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:42.351183    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:42.351193    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:42.362399    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:42.362409    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
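
	One step in every pass, "container status", is a shell fallback chain: use crictl when the binary is installed, otherwise fall back to plain docker. The command string below is quoted verbatim from the log; wrapping it in Go via os/exec is purely illustrative.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func containerStatus() (string, error) {
	        // `which crictl || echo crictl` expands to crictl's path when it
	        // is installed, or to the bare word "crictl" (which then fails as
	        // a command) so that the trailing `|| sudo docker ps -a` branch
	        // takes over on hosts without crictl.
	        cmd := exec.Command("/bin/bash", "-c",
	            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	        out, err := cmd.CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        out, err := containerStatus()
	        if err != nil {
	            fmt.Println("container status failed:", err)
	        }
	        fmt.Print(out)
	    }
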
	I1014 07:42:44.889230    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:48.796345    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:48.796771    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:48.828002    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:48.828152    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:48.845988    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:48.846090    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:48.860022    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:48.860110    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:48.871484    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:48.871567    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:48.881972    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:48.882045    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:48.897184    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:48.897262    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:48.907221    4105 logs.go:282] 0 containers: []
	W1014 07:42:48.907233    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:48.907290    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:48.917991    4105 logs.go:282] 0 containers: []
	W1014 07:42:48.918001    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:48.918008    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:48.918013    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:48.931790    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:48.931803    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:48.948920    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:48.948946    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:48.973968    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:48.973978    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:48.984953    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:48.984965    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:49.002484    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:49.002496    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:49.013922    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:49.013931    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:49.891725    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:49.891914    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:49.907914    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:49.908006    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:49.920408    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:49.920487    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:49.930966    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:49.931049    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:49.941322    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:49.941403    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:49.952156    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:49.952236    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:49.970498    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:49.970567    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:49.986279    4051 logs.go:282] 0 containers: []
	W1014 07:42:49.986296    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:49.986362    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:50.006352    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:50.006369    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:50.006376    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:50.018134    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:50.018155    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:50.029537    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:50.029548    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:50.041519    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:50.041530    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:50.058791    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:50.058800    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:50.095347    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:50.095357    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:50.107143    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:50.107156    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:50.122827    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:50.122842    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:50.127273    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:50.127280    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:50.143087    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:50.143097    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:50.154252    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:50.154264    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:50.177697    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:50.177705    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:50.193226    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:50.193237    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:50.235140    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:50.235149    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:50.248533    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:50.248542    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:50.261022    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:50.261033    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:50.272687    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:50.272697    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:49.026071    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:49.026081    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:49.030410    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:49.030417    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:49.064739    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:49.064750    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:49.079561    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:49.079573    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:49.092847    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:49.092858    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:49.106423    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:49.106434    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:49.129424    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:49.129433    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:49.167699    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:49.167713    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:51.686156    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:52.786462    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:56.688651    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:56.688780    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:56.701684    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:56.701791    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:56.712859    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:56.712944    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:56.723959    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:56.724023    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:56.734659    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:56.734737    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:56.745439    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:56.745505    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:56.756679    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:56.756740    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:56.765994    4105 logs.go:282] 0 containers: []
	W1014 07:42:56.766006    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:56.766060    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:56.782766    4105 logs.go:282] 0 containers: []
	W1014 07:42:56.782777    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:56.782786    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:56.782791    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:56.787664    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:56.787670    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:56.801916    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:56.801929    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:56.820392    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:56.820404    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:56.832489    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:56.832500    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:56.844481    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:56.844491    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:56.885363    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:56.885376    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:56.921799    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:56.921809    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:56.947590    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:56.947605    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:56.962972    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:56.962983    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:56.976776    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:56.976786    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:56.990701    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:56.990711    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:57.004839    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:57.004854    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:57.015699    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:57.015711    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:57.038412    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:57.038419    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:57.787978    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:57.788406    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:57.821811    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:42:57.821952    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:57.843708    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:42:57.843808    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:57.868090    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:42:57.868182    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:57.891983    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:42:57.892064    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:57.910073    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:42:57.910156    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:57.920739    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:42:57.920816    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:57.931872    4051 logs.go:282] 0 containers: []
	W1014 07:42:57.931883    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:57.931947    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:57.942401    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:42:57.942421    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:57.942426    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:57.946702    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:42:57.946710    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:42:57.961687    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:42:57.961699    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:42:57.986211    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:42:57.986224    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:57.997849    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:42:57.997859    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:42:58.012586    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:42:58.012594    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:42:58.024177    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:58.024187    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:58.048612    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:58.048622    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:58.089102    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:42:58.089114    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:42:58.102943    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:42:58.102953    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:42:58.120877    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:42:58.120888    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:42:58.132620    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:42:58.132634    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:42:58.144721    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:42:58.144731    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:42:58.156373    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:58.156384    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:58.195981    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:42:58.195991    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:42:58.210951    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:42:58.210964    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:42:58.222763    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:42:58.222773    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:42:59.551850    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:00.736887    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:04.554323    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:04.554564    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:04.570561    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:04.570659    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:04.584059    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:04.584132    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:04.595202    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:04.595266    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:04.606278    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:04.606361    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:04.617506    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:04.617584    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:04.628521    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:04.628595    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:04.639210    4105 logs.go:282] 0 containers: []
	W1014 07:43:04.639222    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:04.639290    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:04.651833    4105 logs.go:282] 0 containers: []
	W1014 07:43:04.651845    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:04.651852    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:04.651857    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:04.688490    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:04.688505    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:04.692636    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:04.692643    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:04.727801    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:04.727814    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:04.742190    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:04.742203    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:04.769386    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:04.769396    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:04.786170    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:04.786179    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:04.800271    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:04.800283    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:04.812469    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:04.812481    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:04.827091    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:04.827104    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:04.841201    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:04.841212    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:04.853144    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:04.853158    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:04.875593    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:04.875603    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:04.890062    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:04.890075    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:04.907858    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:04.907868    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
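The block above is one full round of minikube's wait loop for the restarted apiserver: each probe of https://10.0.2.15:8443/healthz gets roughly five seconds before it is reported as "stopped" with a client timeout, and every failed probe triggers a fresh round of component-log collection. A minimal sketch of that poll pattern, with the URL and timeout taken from the log and everything else assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// ~5s budget per probe, matching the gap between "Checking apiserver
	// healthz" and "stopped: ... Client.Timeout exceeded" in the log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate during bring-up.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// A failed probe is what kicks off the log-gathering round above.
			fmt.Printf("stopped: %v\n", err)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}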
	I1014 07:43:07.422168    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:05.738153    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:05.738268    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:05.749211    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:05.749295    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:05.759917    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:05.759993    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:05.770105    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:05.770181    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:05.780787    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:05.780870    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:05.791135    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:05.791208    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:05.801815    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:05.801897    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:05.812406    4051 logs.go:282] 0 containers: []
	W1014 07:43:05.812423    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:05.812496    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:05.822911    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:05.822929    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:05.822935    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:05.858664    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:05.858675    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:05.872583    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:05.872601    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:05.886560    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:05.886572    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:05.898269    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:05.898285    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:05.922503    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:05.922513    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:05.935955    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:05.935966    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:05.976681    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:05.976688    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:05.987750    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:05.987762    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:06.003258    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:06.003268    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:06.015094    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:06.015107    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:06.027051    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:06.027061    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:06.038111    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:06.038122    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:06.042535    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:06.042541    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:06.053959    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:06.053972    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:06.065150    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:06.065163    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:06.083155    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:06.083172    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
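Every gathering round follows the same two-step shape: enumerate candidate containers per component with a docker name filter, then dump at most 400 lines from each hit. Two IDs per component (e.g. kube-apiserver [ce16068a72a5 56f1d1357c30]) are the pre- and post-restart containers; a zero-hit component such as kindnet only produces the "No container was found" warning. A hedged Go sketch of that loop (component list and tail size copied from the log, the surrounding structure assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Cap each dump at 400 lines, exactly as the log's bash command does.
			exec.Command("docker", "logs", "--tail", "400", id).Run()
		}
	}
}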
	I1014 07:43:08.596424    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:12.424794    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:12.424999    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:12.441061    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:12.441151    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:12.453292    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:12.453374    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:12.463855    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:12.463935    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:12.484568    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:12.484643    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:12.496191    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:12.496276    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:12.507096    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:12.507167    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:12.517954    4105 logs.go:282] 0 containers: []
	W1014 07:43:12.517966    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:12.518030    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:12.528637    4105 logs.go:282] 0 containers: []
	W1014 07:43:12.528648    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:12.528655    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:12.528661    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:12.544540    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:12.544550    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:12.556658    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:12.556669    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:12.570476    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:12.570487    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:12.582460    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:12.582472    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:12.601027    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:12.601043    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:12.615438    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:12.615448    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:12.651677    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:12.651687    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:12.663485    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:12.663496    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:12.684653    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:12.684664    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:12.707345    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:12.707354    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:12.744222    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:12.744230    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:12.768310    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:12.768324    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:12.782186    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:12.782197    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:12.796882    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:12.796893    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:13.598893    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:13.599109    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:13.618137    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:13.618248    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:13.631561    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:13.631650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:13.643818    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:13.643896    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:13.654598    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:13.654680    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:13.664761    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:13.664848    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:13.676055    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:13.676136    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:13.686433    4051 logs.go:282] 0 containers: []
	W1014 07:43:13.686444    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:13.686511    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:13.697133    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:13.697152    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:13.697158    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:13.708226    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:13.708238    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:13.719827    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:13.719837    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:13.743211    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:13.743219    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:13.757140    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:13.757155    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:13.774674    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:13.774685    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:13.785516    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:13.785527    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:13.797190    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:13.797200    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:13.831646    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:13.831656    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:13.843182    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:13.843197    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:13.854285    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:13.854303    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:13.867068    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:13.867079    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:13.910602    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:13.910609    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:13.914782    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:13.914790    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:43:13.925991    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:13.926003    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:13.938015    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:13.938026    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:13.952045    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:13.952056    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:15.302849    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:16.471728    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:20.303444    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:20.303593    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:20.315340    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:20.315429    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:20.325994    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:20.326070    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:20.336983    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:20.337072    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:20.348359    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:20.348447    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:20.358725    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:20.358803    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:20.369600    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:20.369676    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:20.383959    4105 logs.go:282] 0 containers: []
	W1014 07:43:20.383969    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:20.384035    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:20.393958    4105 logs.go:282] 0 containers: []
	W1014 07:43:20.393972    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:20.393979    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:20.393986    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:20.405780    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:20.405790    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:20.430593    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:20.430605    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:20.444201    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:20.444215    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:20.463801    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:20.463812    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:20.481414    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:20.481426    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:20.505305    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:20.505315    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:20.544100    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:20.544108    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:20.557393    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:20.557409    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:20.562245    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:20.562254    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:20.597015    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:20.597026    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:20.611309    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:20.611321    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:20.634393    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:20.634408    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:20.645980    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:20.645993    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:20.657491    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:20.657505    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:23.172211    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:21.473936    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:21.474373    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:21.514613    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:21.514748    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:21.531869    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:21.531969    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:21.545437    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:21.545525    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:21.556575    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:21.556658    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:21.567746    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:21.567826    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:21.578614    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:21.578681    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:21.589554    4051 logs.go:282] 0 containers: []
	W1014 07:43:21.589566    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:21.589637    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:21.600644    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:21.600662    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:21.600668    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:21.626577    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:21.626591    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:21.649136    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:21.649143    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:21.660743    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:21.660753    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:21.672816    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:21.672827    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:21.684889    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:21.684902    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:21.696828    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:21.696843    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:21.708498    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:21.708509    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:21.719944    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:21.719955    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:21.735120    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:21.735134    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:21.746549    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:21.746564    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:43:21.757765    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:21.757775    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:21.761902    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:21.761907    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:21.775620    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:21.775630    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:21.787658    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:21.787672    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:21.827960    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:21.827968    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:21.864904    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:21.864916    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:24.381797    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:28.174465    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:28.174780    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:28.203413    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:28.203538    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:28.222888    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:28.222991    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:28.235800    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:28.235887    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:28.247449    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:28.247531    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:28.257580    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:28.257660    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:28.267727    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:28.267810    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:28.279834    4105 logs.go:282] 0 containers: []
	W1014 07:43:28.279847    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:28.279922    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:28.290092    4105 logs.go:282] 0 containers: []
	W1014 07:43:28.290102    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:28.290110    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:28.290115    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:28.301906    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:28.301917    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:28.320946    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:28.320956    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:28.337157    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:28.337167    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:28.348973    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:28.348984    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:28.387037    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:28.387045    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:28.409326    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:28.409342    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:28.430334    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:28.430343    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:28.456331    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:28.456342    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:28.468784    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:28.468797    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:28.483358    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:28.483369    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:28.506820    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:28.506829    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:28.510969    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:28.510975    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:28.545350    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:28.545361    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:28.560061    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:28.560073    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:29.382661    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:29.382850    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:29.407212    4051 logs.go:282] 2 containers: [ce16068a72a5 56f1d1357c30]
	I1014 07:43:29.407305    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:29.419704    4051 logs.go:282] 2 containers: [1540d4312173 b514b0e417d6]
	I1014 07:43:29.419789    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:29.429895    4051 logs.go:282] 1 containers: [76bee3516fb4]
	I1014 07:43:29.429976    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:29.440641    4051 logs.go:282] 2 containers: [d6557d690d32 7a4de019e2fd]
	I1014 07:43:29.440729    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:29.451337    4051 logs.go:282] 1 containers: [883ef2ac2df5]
	I1014 07:43:29.451409    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:29.462572    4051 logs.go:282] 2 containers: [8878f9c55ea3 b526b940abeb]
	I1014 07:43:29.462649    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:29.472769    4051 logs.go:282] 0 containers: []
	W1014 07:43:29.472780    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:29.472843    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:29.483713    4051 logs.go:282] 2 containers: [c23d96b9bc6f 03a0f6a822b1]
	I1014 07:43:29.483732    4051 logs.go:123] Gathering logs for etcd [1540d4312173] ...
	I1014 07:43:29.483739    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1540d4312173"
	I1014 07:43:29.504026    4051 logs.go:123] Gathering logs for kube-proxy [883ef2ac2df5] ...
	I1014 07:43:29.504036    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 883ef2ac2df5"
	I1014 07:43:29.516118    4051 logs.go:123] Gathering logs for kube-controller-manager [8878f9c55ea3] ...
	I1014 07:43:29.516127    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8878f9c55ea3"
	I1014 07:43:29.533214    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:43:29.533223    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:29.546383    4051 logs.go:123] Gathering logs for kube-scheduler [7a4de019e2fd] ...
	I1014 07:43:29.546393    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4de019e2fd"
	I1014 07:43:29.558308    4051 logs.go:123] Gathering logs for kube-apiserver [ce16068a72a5] ...
	I1014 07:43:29.558321    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce16068a72a5"
	I1014 07:43:29.572377    4051 logs.go:123] Gathering logs for kube-apiserver [56f1d1357c30] ...
	I1014 07:43:29.572388    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f1d1357c30"
	I1014 07:43:29.583688    4051 logs.go:123] Gathering logs for etcd [b514b0e417d6] ...
	I1014 07:43:29.583699    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b514b0e417d6"
	I1014 07:43:29.603486    4051 logs.go:123] Gathering logs for coredns [76bee3516fb4] ...
	I1014 07:43:29.603497    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76bee3516fb4"
	I1014 07:43:29.615402    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:29.615411    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:29.652367    4051 logs.go:123] Gathering logs for kube-controller-manager [b526b940abeb] ...
	I1014 07:43:29.652381    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b526b940abeb"
	I1014 07:43:29.663608    4051 logs.go:123] Gathering logs for storage-provisioner [03a0f6a822b1] ...
	I1014 07:43:29.663622    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03a0f6a822b1"
	I1014 07:43:29.680017    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:29.680029    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:29.704137    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:29.704144    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:29.746124    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:29.746132    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:29.750415    4051 logs.go:123] Gathering logs for kube-scheduler [d6557d690d32] ...
	I1014 07:43:29.750423    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6557d690d32"
	I1014 07:43:29.761937    4051 logs.go:123] Gathering logs for storage-provisioner [c23d96b9bc6f] ...
	I1014 07:43:29.761947    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c23d96b9bc6f"
	I1014 07:43:31.078166    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:32.275549    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:37.277851    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:37.277945    4051 kubeadm.go:597] duration metric: took 4m4.247775583s to restartPrimaryControlPlane
	W1014 07:43:37.278023    4051 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 07:43:37.278052    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1014 07:43:38.307452    4051 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.029407084s)
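At 07:43:37 the healthz loop finally gives up: restartPrimaryControlPlane has exhausted its budget (the duration metric reports 4m4.2s), so minikube falls back to wiping the node and re-initializing it from scratch. The reset step, reconstructed from the log as a hedged Go sketch (PATH prefix and CRI socket verbatim from the command above, error handling simplified):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Prefer the pinned v1.24.1 binaries, then fall back to the existing PATH.
	env := "PATH=/var/lib/minikube/binaries/v1.24.1:" + os.Getenv("PATH")
	cmd := exec.Command("sudo", "env", env,
		"kubeadm", "reset", "--cri-socket", "/var/run/cri-dockerd.sock", "--force")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err) // the log shows this completing in ~1.03s
	}
}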
	I1014 07:43:38.307537    4051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:43:38.312636    4051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:43:38.315464    4051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:43:38.318390    4051 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:43:38.318397    4051 kubeadm.go:157] found existing configuration files:
	
	I1014 07:43:38.318426    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/admin.conf
	I1014 07:43:38.321091    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:43:38.321119    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:43:38.323770    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/kubelet.conf
	I1014 07:43:38.326997    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:43:38.327025    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:43:38.330293    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/controller-manager.conf
	I1014 07:43:38.332948    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:43:38.332977    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:43:38.335560    4051 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/scheduler.conf
	I1014 07:43:38.338947    4051 kubeadm.go:163] "https://control-plane.minikube.internal:61423" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61423 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:43:38.338979    4051 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
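The cleanup pass above is a keep-or-delete check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint (https://control-plane.minikube.internal:61423), and any file that cannot be confirmed is removed so kubeadm init will regenerate it. Here kubeadm reset has already deleted all four files, so every grep exits with status 2 and the rm calls are no-ops. The idiom, as a hedged Go sketch (paths and endpoint taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:61423"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero (status 2 here, since the file is missing),
		// which the caller treats as "stale or absent: remove".
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not confirmed in %s - removing\n", endpoint, f)
			os.Remove(f) // the log uses `sudo rm -f`
		}
	}
}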
	I1014 07:43:38.341895    4051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:43:38.360758    4051 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1014 07:43:38.360819    4051 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:43:38.409362    4051 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:43:38.409419    4051 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:43:38.409465    4051 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 07:43:38.461932    4051 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:43:36.080763    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:36.081126    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:36.113340    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:36.113481    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:36.133648    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:36.133742    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:36.147409    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:36.147504    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:36.160179    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:36.160261    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:36.170730    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:36.170812    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:36.181748    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:36.181831    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:36.192583    4105 logs.go:282] 0 containers: []
	W1014 07:43:36.192597    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:36.192665    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:36.204874    4105 logs.go:282] 0 containers: []
	W1014 07:43:36.204885    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:36.204895    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:36.204901    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:36.219447    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:36.219458    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:36.245100    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:36.245114    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:36.257634    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:36.257646    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:36.272566    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:36.272577    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:36.295776    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:36.295784    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:36.299645    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:36.299651    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:36.335631    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:36.335641    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:36.350549    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:36.350560    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:36.392741    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:36.392756    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:36.432382    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:36.432391    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:36.446627    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:36.446637    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:36.468113    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:36.468125    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:36.482990    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:36.483000    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:36.495118    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:36.495132    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:39.008321    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:38.466102    4051 out.go:235]   - Generating certificates and keys ...
	I1014 07:43:38.466199    4051 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:43:38.466304    4051 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:43:38.466454    4051 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 07:43:38.466514    4051 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 07:43:38.466556    4051 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 07:43:38.466602    4051 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 07:43:38.466637    4051 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 07:43:38.466673    4051 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 07:43:38.466809    4051 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 07:43:38.466936    4051 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 07:43:38.466992    4051 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 07:43:38.467066    4051 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:43:38.513724    4051 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:43:38.656411    4051 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:43:38.745475    4051 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:43:39.046344    4051 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:43:39.078756    4051 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:43:39.079149    4051 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:43:39.079283    4051 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:43:39.164836    4051 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:43:39.167882    4051 out.go:235]   - Booting up control plane ...
	I1014 07:43:39.167927    4051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:43:39.167967    4051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:43:39.169800    4051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:43:39.170012    4051 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:43:39.170780    4051 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
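The "static Pods" mentioned here are what let the control plane bootstrap itself: the kubelet runs any Pod manifest it finds under /etc/kubernetes/manifests directly, with no apiserver involved, which is why the apiserver, scheduler, controller-manager, and etcd can start before anything answers on 8443. A toy sketch of that watch-a-directory idea (polling interval and behavior assumed; the real kubelet does far more):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	seen := map[string]bool{}
	for {
		entries, err := os.ReadDir("/etc/kubernetes/manifests")
		if err == nil {
			for _, e := range entries {
				if !seen[e.Name()] {
					seen[e.Name()] = true
					// The kubelet would now create this Pod locally.
					fmt.Println("new static pod manifest:", e.Name())
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}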
	I1014 07:43:44.010421    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:44.010580    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:43.172171    4051 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001445 seconds
	I1014 07:43:43.172230    4051 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:43:43.175756    4051 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:43:43.685275    4051 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:43:43.685382    4051 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-116000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:43:44.193349    4051 kubeadm.go:310] [bootstrap-token] Using token: tk7xbh.c8eu9acuhz8aq2dm
	I1014 07:43:44.199658    4051 out.go:235]   - Configuring RBAC rules ...
	I1014 07:43:44.199733    4051 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:43:44.199784    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:43:44.206194    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:43:44.207101    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:43:44.208125    4051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:43:44.209182    4051 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:43:44.213174    4051 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:43:44.410044    4051 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:43:44.598535    4051 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:43:44.598987    4051 kubeadm.go:310] 
	I1014 07:43:44.599019    4051 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:43:44.599022    4051 kubeadm.go:310] 
	I1014 07:43:44.599086    4051 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:43:44.599093    4051 kubeadm.go:310] 
	I1014 07:43:44.599105    4051 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:43:44.599135    4051 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:43:44.599168    4051 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:43:44.599191    4051 kubeadm.go:310] 
	I1014 07:43:44.599233    4051 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:43:44.599238    4051 kubeadm.go:310] 
	I1014 07:43:44.599263    4051 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:43:44.599268    4051 kubeadm.go:310] 
	I1014 07:43:44.599295    4051 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:43:44.599349    4051 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:43:44.599434    4051 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:43:44.599439    4051 kubeadm.go:310] 
	I1014 07:43:44.599488    4051 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:43:44.599558    4051 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:43:44.599562    4051 kubeadm.go:310] 
	I1014 07:43:44.599622    4051 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tk7xbh.c8eu9acuhz8aq2dm \
	I1014 07:43:44.599691    4051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 \
	I1014 07:43:44.599704    4051 kubeadm.go:310] 	--control-plane 
	I1014 07:43:44.599707    4051 kubeadm.go:310] 
	I1014 07:43:44.599774    4051 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:43:44.599780    4051 kubeadm.go:310] 
	I1014 07:43:44.599828    4051 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tk7xbh.c8eu9acuhz8aq2dm \
	I1014 07:43:44.599882    4051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 
	I1014 07:43:44.600013    4051 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
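
The [WARNING Service-Kubelet] line above is kubeadm pointing out that the kubelet systemd unit is not enabled, so it would not come back after a reboot. A minimal sketch of the fix it suggests (assuming a systemd-based guest, as minikube's images are):

    # enable the unit so kubelet starts on boot; --now also starts it immediately
    sudo systemctl enable --now kubelet.service
    systemctl is-enabled kubelet.service   # should print "enabled"
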
	I1014 07:43:44.600036    4051 cni.go:84] Creating CNI manager for ""
	I1014 07:43:44.600046    4051 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:43:44.602834    4051 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 07:43:44.609798    4051 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 07:43:44.612728    4051 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
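
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are not shown in the log. For orientation, a bridge conflist in the CNI format looks roughly like the sketch below; the subnet and plugin options are illustrative assumptions, not the exact payload minikube wrote:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
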
	I1014 07:43:44.617383    4051 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:43:44.617433    4051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:43:44.617648    4051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-116000 minikube.k8s.io/updated_at=2024_10_14T07_43_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=running-upgrade-116000 minikube.k8s.io/primary=true
	I1014 07:43:44.660096    4051 kubeadm.go:1113] duration metric: took 42.707709ms to wait for elevateKubeSystemPrivileges
	I1014 07:43:44.660116    4051 ops.go:34] apiserver oom_adj: -16
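
The -16 read back from /proc/<pid>/oom_adj means the apiserver is strongly deprioritized as an OOM-kill target (the legacy oom_adj scale runs -17..15, negative is safer). This is most likely the kubelet's oom_score_adj of -997 for Guaranteed pods, which the kernel scales down to -16 when read through the legacy file. To inspect both interfaces by hand:

    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj        # legacy scale, -17..15
    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj  # current scale, -1000..1000
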
	I1014 07:43:44.666395    4051 kubeadm.go:394] duration metric: took 4m11.667224s to StartCluster
	I1014 07:43:44.666412    4051 settings.go:142] acquiring lock: {Name:mk5f137d4011ca4bbc3c8514f15406fc4b6b595c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:43:44.666525    4051 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:43:44.666966    4051 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/kubeconfig: {Name:mkbe79fce3a1d9ddd6036a978e097f20767985b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:43:44.667339    4051 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:43:44.667398    4051 config.go:182] Loaded profile config "running-upgrade-116000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:43:44.667387    4051 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:43:44.667497    4051 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-116000"
	I1014 07:43:44.667504    4051 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-116000"
	W1014 07:43:44.667507    4051 addons.go:243] addon storage-provisioner should already be in state true
	I1014 07:43:44.667521    4051 host.go:66] Checking if "running-upgrade-116000" exists ...
	I1014 07:43:44.667535    4051 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-116000"
	I1014 07:43:44.667564    4051 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-116000"
	I1014 07:43:44.668940    4051 kapi.go:59] client config for running-upgrade-116000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/running-upgrade-116000/client.key", CAFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10257ae40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:43:44.669337    4051 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-116000"
	W1014 07:43:44.669342    4051 addons.go:243] addon default-storageclass should already be in state true
	I1014 07:43:44.669349    4051 host.go:66] Checking if "running-upgrade-116000" exists ...
	I1014 07:43:44.671736    4051 out.go:177] * Verifying Kubernetes components...
	I1014 07:43:44.672129    4051 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:43:44.675898    4051 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:43:44.675905    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	I1014 07:43:44.679741    4051 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:43:44.683751    4051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:43:44.687853    4051 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:43:44.687869    4051 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:43:44.687881    4051 sshutil.go:53] new ssh client: &{IP:localhost Port:61391 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/running-upgrade-116000/id_rsa Username:docker}
	I1014 07:43:44.783103    4051 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:43:44.788623    4051 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:43:44.788678    4051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:43:44.792406    4051 api_server.go:72] duration metric: took 125.057209ms to wait for apiserver process to appear ...
	I1014 07:43:44.792415    4051 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:43:44.792422    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:44.839656    4051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:43:44.862859    4051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:43:45.154848    4051 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:43:45.154860    4051 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:43:44.022354    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:44.022445    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:44.033534    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:44.033615    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:44.044730    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:44.044812    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:44.055400    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:44.055491    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:44.065828    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:44.065903    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:44.076335    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:44.076412    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:44.091504    4105 logs.go:282] 0 containers: []
	W1014 07:43:44.091517    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:44.091582    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:44.103169    4105 logs.go:282] 0 containers: []
	W1014 07:43:44.103181    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:44.103191    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:44.103201    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:44.140060    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:44.140071    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:44.179484    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:44.179501    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:44.207646    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:44.207657    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:44.220980    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:44.220991    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:44.235224    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:44.235238    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:44.239803    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:44.239812    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:44.254600    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:44.254617    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:44.272036    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:44.272051    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:44.287900    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:44.287910    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:44.312628    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:44.312644    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:44.328105    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:44.328119    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:44.339785    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:44.339798    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:44.353539    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:44.353550    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:44.373190    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:44.373205    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
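
The block above is one complete diagnostic sweep: for each control-plane component, list containers whose names carry the k8s_<component> prefix the Docker runtime gives kube pods, then tail each container's log. Reproduced by hand it reduces to roughly this (container IDs will differ per run):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "=== ${c} ${id} ==="
        docker logs --tail 400 "$id"
      done
    done
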
	I1014 07:43:46.887256    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:49.794425    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:49.794505    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:51.889541    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
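
At this point both minikube processes (pids 4051 and 4105) are stuck in the same loop: every /healthz probe of the apiserver at 10.0.2.15:8443 fails client-side with "context deadline exceeded", i.e. nothing ever answers on the socket, as opposed to the apiserver answering with an unhealthy status. A hedged equivalent probe from inside the guest (-k skips TLS verification; /healthz may still require credentials on clusters with anonymous auth disabled):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo
    # a healthy apiserver prints "ok"; hanging until --max-time matches the
    # "Client.Timeout exceeded while awaiting headers" errors in the log
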
	I1014 07:43:51.889869    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:51.919560    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:51.919706    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:51.935701    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:51.935804    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:51.949499    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:51.949585    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:51.960472    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:51.960559    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:51.971495    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:51.971572    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:51.981960    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:51.982041    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:51.992648    4105 logs.go:282] 0 containers: []
	W1014 07:43:51.992660    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:51.992730    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:52.003281    4105 logs.go:282] 0 containers: []
	W1014 07:43:52.003292    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:52.003299    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:52.003304    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:52.007470    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:52.007480    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:52.030732    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:52.030739    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:52.042650    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:52.042660    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:52.067569    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:52.067579    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:52.081638    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:52.081648    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:52.096386    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:52.096395    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:52.114113    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:52.114123    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:52.127948    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:52.127959    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:52.166523    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:52.166535    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:52.205186    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:52.205199    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:52.219456    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:52.219466    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:52.233894    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:52.233904    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:52.246238    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:52.246249    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:52.261919    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:52.261929    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:54.794875    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:54.794905    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:54.775875    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:59.795062    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:59.795079    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:59.778029    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:59.778154    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:59.790105    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:59.790182    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:59.800824    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:59.800903    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:59.811329    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:59.811413    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:59.821941    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:59.822011    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:59.832556    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:59.832632    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:59.843349    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:59.843428    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:59.853804    4105 logs.go:282] 0 containers: []
	W1014 07:43:59.853814    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:59.853874    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:59.864488    4105 logs.go:282] 0 containers: []
	W1014 07:43:59.864500    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:59.864507    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:59.864513    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:59.877161    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:59.877171    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:59.895078    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:59.895094    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:59.909634    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:59.909649    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:59.921438    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:59.921450    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:59.944123    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:59.944132    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:59.981752    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:59.981761    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:44:00.000044    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:44:00.000058    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:44:00.025175    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:44:00.025189    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:44:00.043149    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:44:00.043160    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:44:00.058140    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:00.058153    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:00.062252    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:00.062260    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:00.099841    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:44:00.099854    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:44:00.111562    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:44:00.111573    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:44:00.122879    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:44:00.122891    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:44:02.643742    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:04.804478    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:04.804543    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:07.653634    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:07.653979    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:44:07.682200    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:44:07.682326    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:44:07.697756    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:44:07.697848    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:44:07.709991    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:44:07.710076    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:44:07.720669    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:44:07.720741    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:44:07.733825    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:44:07.733934    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:44:07.748633    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:44:07.748717    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:44:07.760371    4105 logs.go:282] 0 containers: []
	W1014 07:44:07.760383    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:44:07.760453    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:44:07.770622    4105 logs.go:282] 0 containers: []
	W1014 07:44:07.770634    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:44:07.770642    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:44:07.770647    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:44:07.782018    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:44:07.782029    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:44:07.797672    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:44:07.797685    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:44:07.810008    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:44:07.810019    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:44:07.836022    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:44:07.836033    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:44:07.849997    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:44:07.850010    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:44:07.890764    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:44:07.890777    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:44:07.905587    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:07.905599    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:07.910060    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:07.910066    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:07.944758    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:44:07.944767    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:44:07.956676    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:44:07.956687    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:44:07.970872    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:44:07.970881    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:44:08.006481    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:44:08.006492    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:44:08.020706    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:44:08.020720    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:44:08.034792    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:44:08.034804    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:44:09.811837    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:09.811889    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:10.556654    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:14.817504    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:14.817545    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1014 07:44:15.178440    4051 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1014 07:44:15.186836    4051 out.go:177] * Enabled addons: storage-provisioner
	I1014 07:44:15.192315    4051 addons.go:510] duration metric: took 30.504722417s for enable addons: enabled=[storage-provisioner]
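
The default-storageclass addon only failed because its StorageClass list call timed out against the unresponsive apiserver; the change it would have made is setting the standard default-class annotation on minikube's "standard" class. The manual equivalent, once the apiserver responds, is the documented kubectl patch:

    kubectl patch storageclass standard -p \
      '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
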
	I1014 07:44:15.563581    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:15.563724    4105 kubeadm.go:597] duration metric: took 4m3.244681791s to restartPrimaryControlPlane
	W1014 07:44:15.563895    4105 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 07:44:15.563961    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1014 07:44:16.580457    4105 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.015729042s)
	I1014 07:44:16.580528    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:44:16.585460    4105 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:44:16.588324    4105 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:44:16.591110    4105 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:44:16.591115    4105 kubeadm.go:157] found existing configuration files:
	
	I1014 07:44:16.591147    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf
	I1014 07:44:16.595936    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:44:16.595971    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:44:16.598756    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf
	I1014 07:44:16.602077    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:44:16.602110    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:44:16.605191    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf
	I1014 07:44:16.607834    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:44:16.607862    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:44:16.610743    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf
	I1014 07:44:16.613845    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:44:16.613872    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
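
The lines above are minikube's stale-kubeconfig cleanup after `kubeadm reset`: each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed when the check fails (here every grep exits 2 because reset already deleted the files, so the rm -f calls are no-ops). The whole sequence reduces to roughly:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:61521" \
        "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
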
	I1014 07:44:16.616573    4105 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:44:16.634732    4105 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1014 07:44:16.634761    4105 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:44:16.682651    4105 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:44:16.682709    4105 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:44:16.682753    4105 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:44:16.732957    4105 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:44:16.738129    4105 out.go:235]   - Generating certificates and keys ...
	I1014 07:44:16.738254    4105 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:44:16.738440    4105 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:44:16.738491    4105 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 07:44:16.738523    4105 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 07:44:16.738561    4105 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 07:44:16.738591    4105 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 07:44:16.738632    4105 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 07:44:16.738662    4105 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 07:44:16.738699    4105 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 07:44:16.738744    4105 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 07:44:16.738765    4105 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 07:44:16.738795    4105 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:44:16.827466    4105 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:44:16.910164    4105 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:44:17.167559    4105 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:44:17.240156    4105 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:44:17.274826    4105 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:44:17.275247    4105 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:44:17.275273    4105 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:44:17.366266    4105 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:44:17.370366    4105 out.go:235]   - Booting up control plane ...
	I1014 07:44:17.370409    4105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:44:17.370468    4105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:44:17.370564    4105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:44:17.370651    4105 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:44:17.370885    4105 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
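
"Static Pods from directory /etc/kubernetes/manifests" means the kubelet itself launches etcd and the kube-* components straight from on-disk manifests, before any apiserver exists to schedule them. Progress can be watched from the guest roughly like this (socket path taken from the reset command earlier in the log; assumes crictl is present in the image):

    ls /etc/kubernetes/manifests/    # etcd.yaml, kube-apiserver.yaml, ...
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps
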
	I1014 07:44:19.822038    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:19.822062    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:21.871103    4105 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502145 seconds
	I1014 07:44:21.871178    4105 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:44:21.875756    4105 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:44:22.384193    4105 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:44:22.384342    4105 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-496000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:44:22.890153    4105 kubeadm.go:310] [bootstrap-token] Using token: 7cxkf0.of17tz2v25ggwn3g
	I1014 07:44:22.893762    4105 out.go:235]   - Configuring RBAC rules ...
	I1014 07:44:22.893825    4105 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:44:22.893872    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:44:22.895840    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:44:22.897454    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1014 07:44:22.898488    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:44:22.899547    4105 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:44:22.902946    4105 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:44:23.069924    4105 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:44:23.295165    4105 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:44:23.295936    4105 kubeadm.go:310] 
	I1014 07:44:23.295972    4105 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:44:23.295978    4105 kubeadm.go:310] 
	I1014 07:44:23.296022    4105 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:44:23.296027    4105 kubeadm.go:310] 
	I1014 07:44:23.296038    4105 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:44:23.296074    4105 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:44:23.296101    4105 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:44:23.296105    4105 kubeadm.go:310] 
	I1014 07:44:23.296134    4105 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:44:23.296146    4105 kubeadm.go:310] 
	I1014 07:44:23.296170    4105 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:44:23.296175    4105 kubeadm.go:310] 
	I1014 07:44:23.296201    4105 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:44:23.296234    4105 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:44:23.296283    4105 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:44:23.296288    4105 kubeadm.go:310] 
	I1014 07:44:23.296328    4105 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:44:23.296377    4105 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:44:23.296382    4105 kubeadm.go:310] 
	I1014 07:44:23.296421    4105 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7cxkf0.of17tz2v25ggwn3g \
	I1014 07:44:23.296470    4105 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 \
	I1014 07:44:23.296481    4105 kubeadm.go:310] 	--control-plane 
	I1014 07:44:23.296486    4105 kubeadm.go:310] 
	I1014 07:44:23.296527    4105 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:44:23.296531    4105 kubeadm.go:310] 
	I1014 07:44:23.296567    4105 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7cxkf0.of17tz2v25ggwn3g \
	I1014 07:44:23.296640    4105 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 
	I1014 07:44:23.296846    4105 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:44:23.296940    4105 cni.go:84] Creating CNI manager for ""
	I1014 07:44:23.296950    4105 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:44:23.301475    4105 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 07:44:23.311620    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 07:44:23.318247    4105 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 07:44:23.325148    4105 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:44:23.325220    4105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:44:23.325295    4105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-496000 minikube.k8s.io/updated_at=2024_10_14T07_44_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=stopped-upgrade-496000 minikube.k8s.io/primary=true
	I1014 07:44:23.362028    4105 ops.go:34] apiserver oom_adj: -16
	I1014 07:44:23.362132    4105 kubeadm.go:1113] duration metric: took 36.961375ms to wait for elevateKubeSystemPrivileges
	I1014 07:44:23.372260    4105 kubeadm.go:394] duration metric: took 4m11.064688s to StartCluster
	I1014 07:44:23.372278    4105 settings.go:142] acquiring lock: {Name:mk5f137d4011ca4bbc3c8514f15406fc4b6b595c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:44:23.372369    4105 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:44:23.372756    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/kubeconfig: {Name:mkbe79fce3a1d9ddd6036a978e097f20767985b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:44:23.372929    4105 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:44:23.373032    4105 config.go:182] Loaded profile config "stopped-upgrade-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:44:23.372959    4105 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:44:23.373089    4105 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-496000"
	I1014 07:44:23.373099    4105 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-496000"
	W1014 07:44:23.373102    4105 addons.go:243] addon storage-provisioner should already be in state true
	I1014 07:44:23.373110    4105 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-496000"
	I1014 07:44:23.373140    4105 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-496000"
	I1014 07:44:23.373114    4105 host.go:66] Checking if "stopped-upgrade-496000" exists ...
	I1014 07:44:23.374412    4105 kapi.go:59] client config for stopped-upgrade-496000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.key", CAFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1064e6e40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:44:23.374557    4105 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-496000"
	W1014 07:44:23.374563    4105 addons.go:243] addon default-storageclass should already be in state true
	I1014 07:44:23.374570    4105 host.go:66] Checking if "stopped-upgrade-496000" exists ...
	I1014 07:44:23.376466    4105 out.go:177] * Verifying Kubernetes components...
	I1014 07:44:23.376863    4105 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:44:23.380738    4105 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:44:23.380751    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:44:23.384457    4105 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:44:23.388556    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:44:23.392551    4105 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:44:23.392558    4105 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:44:23.392566    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:44:23.482356    4105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:44:23.487703    4105 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:44:23.487760    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:44:23.491906    4105 api_server.go:72] duration metric: took 118.912208ms to wait for apiserver process to appear ...
	I1014 07:44:23.491915    4105 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:44:23.491923    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:23.511351    4105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:44:23.532934    4105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:44:23.874887    4105 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:44:23.874899    4105 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:44:24.825811    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:24.825838    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:28.495917    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:28.495941    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:29.829182    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:29.829219    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:33.497545    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:33.497566    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:34.832630    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:34.832649    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:38.498818    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:38.498839    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:39.834231    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:39.834284    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:43.499886    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:43.499932    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:44.837222    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:44.837361    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:44:44.848499    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:44:44.848583    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:44:44.859452    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:44:44.859531    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:44:44.870390    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:44:44.870466    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:44:44.880834    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:44:44.880912    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:44:44.891622    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:44:44.891709    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:44:44.902232    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:44:44.902312    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:44:44.916770    4051 logs.go:282] 0 containers: []
	W1014 07:44:44.916778    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:44:44.916842    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:44:44.927653    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:44:44.927668    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:44:44.927674    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:44:44.941697    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:44:44.941710    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:44:44.953471    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:44:44.953484    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:44:44.969467    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:44:44.969476    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:44:44.983463    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:44.983476    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:44.987852    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:44.987859    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:45.023676    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:44:45.023686    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:44:45.038326    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:44:45.038339    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:44:45.050711    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:44:45.050724    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:44:45.063426    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:44:45.063436    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:44:45.080853    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:44:45.080866    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:44:45.092578    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:44:45.092592    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:44:45.117360    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:44:45.117367    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:44:48.500915    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:48.500956    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:47.655679    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:53.502035    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:53.502071    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1014 07:44:53.882779    4105 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1014 07:44:53.887103    4105 out.go:177] * Enabled addons: storage-provisioner
	I1014 07:44:53.898801    4105 addons.go:510] duration metric: took 30.519991834s for enable addons: enabled=[storage-provisioner]
	I1014 07:44:52.658272    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:52.658399    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:44:52.669789    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:44:52.669875    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:44:52.680038    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:44:52.680117    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:44:52.690695    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:44:52.690771    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:44:52.701426    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:44:52.701507    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:44:52.711860    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:44:52.711947    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:44:52.725950    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:44:52.726030    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:44:52.736569    4051 logs.go:282] 0 containers: []
	W1014 07:44:52.736583    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:44:52.736650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:44:52.747833    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:44:52.747849    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:44:52.747855    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:44:52.768233    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:44:52.768242    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:44:52.779785    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:44:52.779794    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:44:52.806857    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:44:52.806873    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:44:52.841903    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:52.841913    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:52.879500    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:44:52.879513    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:44:52.893968    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:44:52.893978    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:44:52.905639    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:44:52.905650    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:44:52.917548    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:52.917557    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:52.922654    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:44:52.922661    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:44:52.937116    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:44:52.937127    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:44:52.948920    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:44:52.948929    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:44:52.970801    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:44:52.970811    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:44:55.485227    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:58.503094    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:58.503124    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:00.487680    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:00.487793    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:00.499343    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:00.499425    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:00.511380    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:00.511472    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:00.525459    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:00.525544    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:00.536598    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:00.536667    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:00.548233    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:00.548316    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:03.504334    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:03.504354    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:00.561515    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:00.561594    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:00.572417    4051 logs.go:282] 0 containers: []
	W1014 07:45:00.572429    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:00.572494    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:00.582967    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:00.582984    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:00.582990    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:00.595060    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:00.595071    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:00.631629    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:00.631640    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:00.648073    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:00.648083    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:00.665030    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:00.665040    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:00.676159    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:00.676170    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:00.687637    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:00.687648    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:00.702279    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:00.702289    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:00.714102    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:00.714112    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:00.731441    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:00.731451    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:00.765326    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:00.765337    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:00.770085    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:00.770090    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:00.781512    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:00.781521    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:03.306607    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:08.506066    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:08.506088    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:08.308950    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:08.309187    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:08.327344    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:08.327449    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:08.340646    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:08.340726    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:08.352600    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:08.352674    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:08.364886    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:08.364969    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:08.375932    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:08.376014    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:08.391799    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:08.391877    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:08.402225    4051 logs.go:282] 0 containers: []
	W1014 07:45:08.402241    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:08.402311    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:08.412556    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:08.412579    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:08.412585    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:08.436227    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:08.436236    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:08.448153    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:08.448163    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:08.463078    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:08.463090    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:08.476235    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:08.476248    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:08.487888    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:08.487900    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:08.499693    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:08.499703    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:08.513911    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:08.513923    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:08.533964    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:08.533974    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:08.569297    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:08.569306    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:08.573453    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:08.573459    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:08.610304    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:08.610314    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:08.624516    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:08.624527    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:13.507785    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:13.507806    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:11.141904    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:18.509895    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:18.509919    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:16.144121    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:16.144304    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:16.157011    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:16.157101    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:16.167586    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:16.167660    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:16.177941    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:16.178007    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:16.188802    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:16.188865    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:16.199079    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:16.199157    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:16.210084    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:16.210158    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:16.220025    4051 logs.go:282] 0 containers: []
	W1014 07:45:16.220036    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:16.220107    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:16.230483    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:16.230500    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:16.230507    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:16.235554    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:16.235561    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:16.270627    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:16.270638    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:16.285153    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:16.285163    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:16.296940    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:16.296952    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:16.308904    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:16.308915    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:16.333532    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:16.333542    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:16.370595    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:16.370606    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:16.384785    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:16.384796    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:16.396771    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:16.396782    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:16.408196    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:16.408207    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:16.426200    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:16.426210    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:16.443554    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:16.443564    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:18.957162    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:23.512091    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:23.512221    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:23.528257    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:23.528339    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:23.539089    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:23.539166    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:23.556400    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:23.556487    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:23.566771    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:23.566847    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:23.577246    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:23.577331    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:23.588179    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:23.588258    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:23.598711    4105 logs.go:282] 0 containers: []
	W1014 07:45:23.598728    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:23.598795    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:23.609459    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:23.609474    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:23.609479    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:23.614284    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:23.614290    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:23.650720    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:23.650735    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:23.663209    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:23.663220    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:23.674903    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:23.674917    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:23.686343    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:23.686359    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:23.711447    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:23.711454    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:23.723007    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:23.723020    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:23.762748    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:23.762759    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:23.777769    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:23.777781    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:23.792771    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:23.792782    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:23.804235    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:23.804248    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:23.831718    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:23.831732    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:23.959339    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:23.959438    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:23.971321    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:23.971398    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:23.983018    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:23.983091    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:23.995086    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:23.995166    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:24.007099    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:24.007177    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:24.017937    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:24.018015    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:24.029086    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:24.029161    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:24.040461    4051 logs.go:282] 0 containers: []
	W1014 07:45:24.040474    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:24.040538    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:24.055086    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:24.055102    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:24.055107    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:24.092229    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:24.092236    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:24.096694    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:24.096701    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:24.135963    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:24.135974    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:24.151056    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:24.151066    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:24.169817    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:24.169827    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:24.181872    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:24.181883    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:24.196491    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:24.196502    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:24.210189    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:24.210200    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:24.222308    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:24.222318    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:24.242117    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:24.242126    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:24.255143    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:24.255152    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:24.278679    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:24.278692    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:26.350974    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:26.792901    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:31.353355    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:31.353596    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:31.376937    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:31.377037    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:31.390938    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:31.391028    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:31.403450    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:31.403537    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:31.414199    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:31.414448    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:31.424892    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:31.424962    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:31.440354    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:31.440417    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:31.450140    4105 logs.go:282] 0 containers: []
	W1014 07:45:31.450149    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:31.450205    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:31.464867    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:31.464887    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:31.464891    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:31.489903    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:31.489910    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:31.527448    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:31.527456    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:31.531671    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:31.531678    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:31.567147    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:31.567158    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:31.581897    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:31.581910    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:31.594855    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:31.594867    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:31.610357    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:31.610372    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:31.627976    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:31.627986    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:31.642336    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:31.642352    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:31.657406    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:31.657416    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:31.673584    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:31.673599    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:31.685064    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:31.685079    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:31.795129    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:31.795228    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:31.806768    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:31.806837    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:31.818126    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:31.818209    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:31.830340    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:31.830412    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:31.842419    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:31.842495    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:31.854176    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:31.854270    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:31.865705    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:31.865787    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:31.876622    4051 logs.go:282] 0 containers: []
	W1014 07:45:31.876635    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:31.876704    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:31.887262    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:31.887278    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:31.887283    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:31.899602    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:31.899611    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:31.915520    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:31.915536    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:31.927755    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:31.927765    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:31.950845    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:31.950853    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:31.985171    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:31.985179    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:31.989489    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:31.989494    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:32.031130    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:32.031141    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:32.045997    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:32.046012    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:32.058682    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:32.058696    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:32.074190    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:32.074201    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:32.086646    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:32.086657    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:32.105027    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:32.105040    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:34.619644    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:34.198510    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:39.621821    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:39.621920    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:39.637441    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:39.637522    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:39.648535    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:39.648616    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:39.659997    4051 logs.go:282] 2 containers: [ec14ed534d2b a7d107d169c1]
	I1014 07:45:39.660109    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:39.674449    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:39.674519    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:39.694919    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:39.694992    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:39.705761    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:39.705839    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:39.716219    4051 logs.go:282] 0 containers: []
	W1014 07:45:39.716230    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:39.716296    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:39.727397    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:39.727412    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:39.727418    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:39.745502    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:39.745511    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:39.759272    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:39.759283    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:39.785332    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:39.785340    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:39.821171    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:39.821186    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:39.826260    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:39.826268    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:39.838535    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:39.838547    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:39.855389    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:39.855400    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:39.871490    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:39.871501    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:39.883724    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:39.883733    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:39.896544    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:39.896555    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:39.932702    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:39.932713    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:39.948052    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:39.948062    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:39.200842    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:39.201009    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:39.212458    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:39.212547    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:39.223235    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:39.223317    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:39.237179    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:39.237260    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:39.247597    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:39.247675    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:39.257746    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:39.257836    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:39.268378    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:39.268448    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:39.278665    4105 logs.go:282] 0 containers: []
	W1014 07:45:39.278683    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:39.278749    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:39.289397    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:39.289412    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:39.289418    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:39.301545    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:39.301556    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:39.340713    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:39.340726    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:39.376927    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:39.376942    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:39.388781    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:39.388793    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:39.400157    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:39.400171    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:39.412031    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:39.412043    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:39.429761    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:39.429773    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:39.453501    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:39.453509    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:39.457936    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:39.457944    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:39.471976    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:39.471987    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:39.488404    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:39.488414    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:39.503535    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:39.503546    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:42.016853    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:42.464408    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:47.019159    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:47.019358    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:47.032554    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:47.032649    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:47.043908    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:47.043994    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:47.054703    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:47.054779    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:47.070476    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:47.070564    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:47.081178    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:47.081255    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:47.092369    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:47.092449    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:47.103566    4105 logs.go:282] 0 containers: []
	W1014 07:45:47.103579    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:47.103650    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:47.114386    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:47.114405    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:47.114410    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:47.126832    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:47.126844    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:47.138509    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:47.138518    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:47.164159    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:47.164176    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:47.180228    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:47.180240    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:47.216933    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:47.216945    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:47.231413    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:47.231424    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:47.245619    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:47.245627    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:47.257746    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:47.257757    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:47.273113    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:47.273125    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:47.291037    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:47.291045    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:47.295450    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:47.295456    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:47.332750    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:47.332762    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
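	Note: each gathering cycle above begins with an enumeration pass (logs.go:282): one "docker ps -a" per control-plane component, filtered on the k8s_ name prefix that cri-dockerd gives pod containers and formatted down to bare container IDs. Below is a minimal Go sketch of that pass, not minikube's source; the component list and the --filter/--format flags are read off the log, while the program structure is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // component names as they appear in the enumeration above
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            // same command the log records via ssh_runner.go:195
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }

	A component with no containers (kindnet here) yields an empty ID list, which the real code reports as the "No container was found" warning at logs.go:284.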
	I1014 07:45:47.466700    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:47.466883    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:47.478839    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:47.478920    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:47.490810    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:47.490885    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:47.502390    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:45:47.502471    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:47.513319    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:47.513408    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:47.524354    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:47.524431    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:47.535375    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:47.535449    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:47.546448    4051 logs.go:282] 0 containers: []
	W1014 07:45:47.546468    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:47.546529    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:47.558348    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:47.558368    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:47.558375    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:47.576731    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:47.576746    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:47.591814    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:47.591824    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:47.606414    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:47.606425    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:47.622217    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:45:47.622227    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:45:47.634690    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:47.634706    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:47.646705    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:47.646715    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:47.683035    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:45:47.683045    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:45:47.698002    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:47.698015    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:47.710858    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:47.710868    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:47.729283    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:47.729293    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:47.742214    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:47.742223    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:47.778795    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:47.778805    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:47.783612    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:47.783620    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:47.795838    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:47.795851    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:50.320701    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:49.851746    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:55.322924    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
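	Note: the "Checking apiserver healthz" / "stopped" pair above (api_server.go:253/269) is a single HTTPS GET against https://10.0.2.15:8443/healthz that fails client-side; the two timestamps are exactly 5s apart, which suggests a 5s client timeout. A hedged Go sketch of such a probe follows; it is not minikube's implementation, and the timeout value and TLS handling are assumptions.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // inferred from the 07:45:50.32 -> 07:45:55.32 gap
            Transport: &http.Transport{
                // the apiserver serves a cluster-CA cert the host does not
                // trust; a real client would pin that CA rather than skip
                // verification as this self-contained sketch does
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // on expiry err wraps "context deadline exceeded (Client.Timeout
            // exceeded while awaiting headers)", matching the log line
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }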
	I1014 07:45:55.323088    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:55.337256    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:45:55.337347    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:55.348267    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:45:55.348344    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:55.358766    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:45:55.358838    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:55.369086    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:45:55.369169    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:55.379448    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:45:55.379524    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:55.392976    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:45:55.393057    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:55.403293    4051 logs.go:282] 0 containers: []
	W1014 07:45:55.403305    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:55.403375    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:55.414107    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:45:55.414126    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:45:55.414132    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:45:55.433170    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:55.433180    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:55.467812    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:55.467830    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:55.503425    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:45:55.503440    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:45:55.516226    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:45:55.516239    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:45:55.531741    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:45:55.531751    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:45:55.543781    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:55.543791    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:55.548232    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:45:55.548240    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:45:54.854002    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:54.854132    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:54.866957    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:54.867045    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:54.878113    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:54.878197    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:54.888759    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:54.888846    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:54.899277    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:54.899355    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:54.909977    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:54.910058    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:54.920753    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:54.920825    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:54.930900    4105 logs.go:282] 0 containers: []
	W1014 07:45:54.930910    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:54.930972    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:54.941776    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:54.941794    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:54.941800    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:54.981746    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:54.981755    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:55.018173    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:55.018187    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:55.032588    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:55.032600    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:55.044094    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:55.044104    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:55.063321    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:55.063333    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:55.076041    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:55.076056    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:55.099822    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:55.099836    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:55.111486    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:55.111500    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:55.115595    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:55.115600    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:55.130121    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:55.130138    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:55.145369    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:55.145383    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:55.167200    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:55.167212    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:57.683964    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:55.562688    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:45:55.562698    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:45:55.574858    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:45:55.574868    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:55.586469    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:45:55.586480    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:45:55.600401    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:45:55.600412    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:45:55.615960    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:55.615971    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:55.640229    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:45:55.640239    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:45:55.652567    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:45:55.652577    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:45:58.166635    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:02.686132    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:02.686241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:02.701702    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:02.701793    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:02.712017    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:02.712100    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:02.722584    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:02.722655    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:02.733616    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:02.733692    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:02.744059    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:02.744133    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:02.754806    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:02.754878    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:02.764967    4105 logs.go:282] 0 containers: []
	W1014 07:46:02.764979    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:02.765051    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:02.775568    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:02.775589    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:02.775594    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:02.787813    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:02.787826    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:02.800550    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:02.800564    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:02.817813    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:02.817825    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:02.841327    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:02.841336    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:02.882173    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:02.882188    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:02.894684    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:02.894697    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:02.908636    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:02.908646    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:02.922731    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:02.922764    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:02.938527    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:02.938544    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:02.951912    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:02.951927    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:02.963503    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:02.963514    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:03.003435    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:03.003444    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
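	Note: once containers are enumerated, each cycle tails the last 400 lines from every source (logs.go:123): docker logs per container, journalctl for kubelet and for docker/cri-docker, a severity-filtered dmesg, "kubectl describe nodes" via the pinned v1.24.1 binary, and a container-status pass that falls back from crictl to docker. The sketch below simply replays those commands through /bin/bash -c the way ssh_runner.go:195 records them; the container ID is copied from the cycle above and the helper structure is assumed.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-source command, mirroring the
    // "Gathering logs for X ..." / Run pairs in the log.
    func gather(name, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        gather("describe nodes",
            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes"+
                " --kubeconfig=/var/lib/minikube/kubeconfig")
        // one docker-logs pass per enumerated container, e.g. the
        // kube-apiserver ID from the cycle above:
        gather("kube-apiserver [87bc8accb53d]", "docker logs --tail 400 87bc8accb53d")
    }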
	I1014 07:46:03.168819    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:03.168979    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:03.183721    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:03.183802    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:03.196107    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:03.196186    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:03.206977    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:03.207054    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:03.217237    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:03.217316    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:03.227485    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:03.227562    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:03.238683    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:03.238753    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:03.250080    4051 logs.go:282] 0 containers: []
	W1014 07:46:03.250094    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:03.250165    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:03.262040    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:03.262056    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:03.262062    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:03.276223    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:03.276236    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:03.287721    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:03.287733    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:03.320834    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:03.320842    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:03.356367    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:03.356378    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:03.374217    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:03.374227    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:03.393326    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:03.393339    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:03.408171    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:03.408182    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:03.420049    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:03.420059    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:03.443124    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:03.443131    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:03.454574    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:03.454584    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:03.468932    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:03.468942    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:03.481903    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:03.481913    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:03.494039    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:03.494052    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:03.498814    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:03.498820    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:05.509580    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:06.012705    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:10.511765    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:10.511884    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:10.523040    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:10.523144    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:10.534721    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:10.534801    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:10.545369    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:10.545453    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:10.556457    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:10.556536    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:10.566314    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:10.566390    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:10.576299    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:10.576380    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:10.589999    4105 logs.go:282] 0 containers: []
	W1014 07:46:10.590013    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:10.590079    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:10.600758    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:10.600774    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:10.600779    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:10.612357    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:10.612370    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:10.628114    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:10.628124    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:10.645837    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:10.645848    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:10.658237    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:10.658249    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:10.698628    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:10.698642    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:10.713365    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:10.713377    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:10.727451    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:10.727462    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:10.746245    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:10.746256    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:10.758218    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:10.758229    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:10.762693    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:10.762699    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:10.822949    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:10.822964    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:10.835085    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:10.835097    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:13.360790    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:11.015010    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:11.015154    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:11.026701    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:11.026789    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:11.037559    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:11.037635    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:11.048142    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:11.048226    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:11.059315    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:11.059390    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:11.069604    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:11.069688    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:11.079998    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:11.080072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:11.089703    4051 logs.go:282] 0 containers: []
	W1014 07:46:11.089719    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:11.089780    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:11.100671    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:11.100688    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:11.100694    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:11.136494    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:11.136501    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:11.152407    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:11.152421    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:11.177097    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:11.177109    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:11.188821    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:11.188832    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:11.201361    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:11.201375    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:11.219761    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:11.219778    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:11.231316    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:11.231326    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:11.235853    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:11.235858    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:11.270498    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:11.270508    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:11.296665    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:11.296675    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:11.312098    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:11.312108    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:11.326199    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:11.326209    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:11.340061    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:11.340071    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:11.352133    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:11.352143    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:13.871166    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:18.362996    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:18.363205    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:18.377728    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:18.377811    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:18.388158    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:18.388241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:18.399222    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:18.399296    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:18.409674    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:18.409753    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:18.424272    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:18.424351    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:18.435296    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:18.435374    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:18.445666    4105 logs.go:282] 0 containers: []
	W1014 07:46:18.445676    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:18.445738    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:18.455768    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:18.455784    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:18.455790    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:18.490924    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:18.490939    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:18.509785    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:18.509795    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:18.521607    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:18.521618    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:18.547569    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:18.547580    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:18.559043    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:18.559057    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:18.597749    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:18.597759    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:18.602279    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:18.602287    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:18.613929    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:18.613940    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:18.625535    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:18.625546    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:18.644495    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:18.644506    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:18.656151    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:18.656162    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:18.671148    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:18.671160    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:18.873300    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:18.873475    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:18.885179    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:18.885260    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:18.900860    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:18.900941    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:18.911794    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:18.911873    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:18.922344    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:18.922430    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:18.933004    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:18.933084    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:18.943504    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:18.943579    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:18.954098    4051 logs.go:282] 0 containers: []
	W1014 07:46:18.954109    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:18.954169    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:18.964839    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:18.964855    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:18.964861    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:18.977130    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:18.977143    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:18.989006    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:18.989018    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:19.006519    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:19.006529    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:19.018297    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:19.018309    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:19.044566    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:19.044576    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:19.049315    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:19.049323    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:19.084628    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:19.084642    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:19.096302    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:19.096313    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:19.107652    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:19.107666    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:19.124444    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:19.124457    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:19.136399    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:19.136411    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:19.148884    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:19.148894    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:19.185258    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:19.185279    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:19.200030    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:19.200040    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:21.187268    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:21.717289    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:26.189439    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:26.189641    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:26.203524    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:26.203614    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:26.214219    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:26.214294    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:26.224912    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:26.224986    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:26.235078    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:26.235158    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:26.245336    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:26.245419    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:26.255564    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:26.255638    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:26.266127    4105 logs.go:282] 0 containers: []
	W1014 07:46:26.266142    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:26.266203    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:26.276967    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:26.276983    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:26.276989    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:26.312400    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:26.312410    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:26.331020    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:26.331033    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:26.345672    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:26.345682    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:26.357251    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:26.357262    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:26.369296    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:26.369307    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:26.384393    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:26.384405    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:26.396426    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:26.396437    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:26.433036    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:26.433046    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:26.437286    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:26.437292    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:26.454714    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:26.454729    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:26.466730    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:26.466743    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:26.490997    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:26.491007    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:29.005441    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:26.719451    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:26.719593    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:26.732406    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:26.732495    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:26.743417    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:26.743499    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:26.758556    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:26.758644    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:26.770222    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:26.770303    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:26.781229    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:26.781309    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:26.795565    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:26.795646    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:26.806277    4051 logs.go:282] 0 containers: []
	W1014 07:46:26.806291    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:26.806361    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:26.816898    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:26.816916    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:26.816921    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:26.827995    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:26.828007    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:26.846307    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:26.846316    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:26.858374    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:26.858385    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:26.870025    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:26.870037    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:26.874642    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:26.874649    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:26.888863    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:26.888874    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:26.900505    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:26.900515    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:26.915973    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:26.915983    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:26.940775    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:26.940782    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:26.976769    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:26.976780    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:26.988419    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:26.988429    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:27.003602    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:27.003614    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:27.020661    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:27.020672    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:27.036252    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:27.036263    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:29.573468    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:34.007177    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:34.007402    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:34.022044    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:34.022140    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:34.034063    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:34.034146    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:34.045096    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:34.045173    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:34.575647    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:34.575811    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:34.586652    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:34.586734    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:34.600370    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:34.600449    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:34.610708    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:34.610783    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:34.621062    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:34.621136    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:34.631668    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:34.631751    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:34.642533    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:34.642616    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:34.652432    4051 logs.go:282] 0 containers: []
	W1014 07:46:34.652445    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:34.652510    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:34.663068    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:34.663083    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:34.663089    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:34.674684    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:34.674696    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:34.708444    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:34.708455    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:34.726485    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:34.726496    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:34.740890    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:34.740899    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:34.754274    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:34.754288    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:34.766312    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:34.766322    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:34.783923    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:34.783935    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:34.812462    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:34.812480    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:34.817186    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:34.817196    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:34.853942    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:34.853953    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:34.865744    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:34.865756    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:34.877954    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:34.877965    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:34.893552    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:34.893562    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:34.906149    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:34.906161    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
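
The "container status" command above relies on a small shell fallback: "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" prefers crictl when it is installed and degrades to plain "docker ps -a" otherwise. The per-container collection is just "docker logs --tail 400" on each ID found during enumeration. A hedged sketch of both, with tailContainerLog and containerStatus as illustrative names:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLog mirrors the per-container "docker logs --tail 400"
    // calls: only the most recent 400 lines are kept per component.
    func tailContainerLog(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    // containerStatus reproduces the fallback chain from the log: use
    // crictl when it is on PATH, otherwise fall back to "docker ps -a".
    // The backtick substitution requires a shell, hence bash -c.
    func containerStatus() (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        return string(out), err
    }

    func main() {
        if status, err := containerStatus(); err == nil {
            fmt.Println(status)
        }
        // example ID taken from the log; on any other host this simply errors
        if logTail, err := tailContainerLog("87bc8accb53d"); err == nil {
            fmt.Println(logTail)
        }
    }
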
	I1014 07:46:34.056161    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:34.056241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:34.066343    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:34.066425    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:34.077209    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:34.077289    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:34.087130    4105 logs.go:282] 0 containers: []
	W1014 07:46:34.087144    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:34.087206    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:34.097294    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:34.097315    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:34.097321    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:34.132918    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:34.132930    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:34.147177    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:34.147186    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:34.159320    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:34.159333    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:34.197037    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:34.197050    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:34.211268    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:34.211279    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:34.228354    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:34.228364    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:34.246037    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:34.246053    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:34.262174    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:34.262191    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:34.299013    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:34.299024    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:34.328699    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:34.328711    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:34.354466    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:34.354486    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:34.363514    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:34.363528    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
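
Besides container logs, every pass also collects four host-level sources: the kubelet unit and the docker/cri-docker units from journald, a severity-filtered dmesg, and "kubectl describe nodes" run with the version-pinned binary minikube ships at /var/lib/minikube/binaries/v1.24.1/. A sketch that replays those commands verbatim; hostLogSources is an illustrative table, and the commands assume they run inside the minikube guest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostLogSources maps each section name used in the log to the exact
    // shell command the harness runs for it.
    var hostLogSources = map[string]string{
        "kubelet": "sudo journalctl -u kubelet -n 400",
        "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
        "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes " +
            "--kubeconfig=/var/lib/minikube/kubeconfig",
    }

    func main() {
        for name, cmd := range hostLogSources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            fmt.Printf("== %s ==\n%s\n", name, out)
        }
    }
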
	I1014 07:46:36.896696    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:37.420128    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:41.898850    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:41.899127    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:41.919343    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:41.919448    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:41.932731    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:41.932806    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:41.944452    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:41.944543    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:41.955075    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:41.955163    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:41.965417    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:41.965492    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:41.975568    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:41.975636    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:41.986357    4105 logs.go:282] 0 containers: []
	W1014 07:46:41.986373    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:41.986439    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:41.996849    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:41.996867    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:41.996872    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:42.001428    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:46:42.001447    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:46:42.013898    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:46:42.013909    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:46:42.025326    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:42.025336    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:42.037903    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:42.037914    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:42.049265    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:42.049277    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:42.070826    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:42.070841    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:42.108336    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:42.108347    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:42.122712    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:42.122722    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:42.134466    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:42.134478    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:42.159444    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:42.159458    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:42.195975    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:42.195987    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:42.208142    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:42.208153    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:42.223335    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:42.223346    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:42.235704    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:42.235717    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:42.421848    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:42.422072    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:42.436142    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:42.436231    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:42.451551    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:42.451629    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:42.462451    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:42.462525    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:42.473994    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:42.474066    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:42.487374    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:42.487454    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:42.498187    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:42.498269    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:42.508973    4051 logs.go:282] 0 containers: []
	W1014 07:46:42.508987    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:42.509054    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:42.520326    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:42.520347    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:42.520353    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:42.536679    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:42.536689    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:42.548456    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:42.548468    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:42.573595    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:42.573606    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:42.609010    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:42.609022    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:42.613969    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:42.613976    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:42.629244    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:42.629257    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:42.644897    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:42.644907    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:42.656652    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:42.656661    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:42.691814    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:42.691827    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:42.705179    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:42.705190    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:42.717482    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:42.717497    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:42.729519    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:42.729529    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:42.745399    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:42.745410    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:42.757717    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:42.757727    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:45.278276    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:44.752329    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:50.280544    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:50.280690    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:50.291709    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:50.291785    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:50.302236    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:50.302305    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:50.313183    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:50.313266    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:50.327492    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:50.327579    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:50.338366    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:50.338444    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:50.349244    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:50.349325    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:50.359482    4051 logs.go:282] 0 containers: []
	W1014 07:46:50.359496    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:50.359564    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:50.370305    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:50.370321    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:50.370326    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:50.406722    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:50.406755    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:50.419033    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:50.419046    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:50.430956    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:50.430966    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:50.442518    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:50.442530    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:50.460534    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:50.460547    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:50.472344    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:50.472354    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:50.497278    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:50.497287    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:50.511308    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:50.511324    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:46:50.528340    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:50.528350    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:50.543452    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:50.543463    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:50.547822    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:50.547830    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:49.754482    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:49.754635    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:49.766968    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:49.767064    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:49.777971    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:49.778047    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:49.789201    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:49.789286    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:49.799866    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:49.799943    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:49.810208    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:49.810274    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:49.825158    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:49.825234    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:49.836351    4105 logs.go:282] 0 containers: []
	W1014 07:46:49.836367    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:49.836432    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:49.847470    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:49.847490    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:46:49.847496    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:46:49.861618    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:49.861628    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:49.873792    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:49.873803    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:49.885954    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:49.885964    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:49.909552    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:46:49.909562    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:46:49.921068    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:49.921081    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:49.932807    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:49.932818    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:49.944743    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:49.944755    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:49.949500    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:49.949507    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:49.988637    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:49.988648    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:50.007197    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:50.007207    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:50.032056    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:50.032067    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:50.043623    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:50.043636    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:50.081917    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:50.081926    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:50.095682    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:50.095694    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:52.612900    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:50.583466    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:50.583477    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:50.598267    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:50.598277    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:50.609810    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:50.609821    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:53.123774    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:57.615140    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:57.615339    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:57.627730    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:57.627811    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:57.638249    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:57.638335    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:57.649505    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:57.649587    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:57.660342    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:57.660428    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:57.670850    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:57.670931    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:57.681651    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:57.681736    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:57.692191    4105 logs.go:282] 0 containers: []
	W1014 07:46:57.692201    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:57.692263    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:57.702421    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:57.702437    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:57.702442    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:57.727850    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:57.727867    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:57.763340    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:57.763353    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:57.777471    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:57.777482    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:57.791883    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:46:57.791895    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:46:57.804817    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:57.804830    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:57.816275    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:57.816288    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:57.827882    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:57.827892    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:57.845949    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:57.845959    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:57.859181    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:57.859191    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:57.870583    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:57.870597    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:57.909344    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:57.909355    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:57.914251    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:46:57.914260    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:46:57.928027    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:57.928039    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:57.942711    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:57.942724    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:58.125937    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:58.126050    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:58.136897    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:46:58.136980    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:58.146965    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:46:58.147054    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:58.157562    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:46:58.157649    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:58.168661    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:46:58.168740    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:58.179544    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:46:58.179618    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:58.190397    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:46:58.190468    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:58.200579    4051 logs.go:282] 0 containers: []
	W1014 07:46:58.200597    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:58.200663    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:58.212087    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:46:58.212104    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:46:58.212109    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:46:58.223651    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:46:58.223660    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:46:58.244707    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:46:58.244717    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:46:58.256346    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:58.256357    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:58.281865    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:46:58.281876    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:46:58.296671    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:46:58.296680    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:46:58.308926    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:46:58.308944    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:46:58.320576    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:46:58.320589    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:46:58.336331    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:46:58.336343    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:58.348397    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:58.348408    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:58.383835    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:46:58.383845    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:46:58.397621    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:46:58.397632    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:46:58.409540    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:58.409553    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:58.445337    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:58.445348    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:58.450327    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:46:58.450333    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:00.456289    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:00.968108    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:05.458520    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:05.458739    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:05.472091    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:05.472182    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:05.489447    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:05.489525    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:05.500493    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:05.500577    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:05.515546    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:05.515627    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:05.528789    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:05.528869    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:05.539443    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:05.539517    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:05.550993    4105 logs.go:282] 0 containers: []
	W1014 07:47:05.551005    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:05.551068    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:05.561917    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:05.561933    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:05.561939    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:05.574153    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:05.574165    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:05.588915    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:05.588926    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:05.623615    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:05.623628    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:05.640208    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:05.640218    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:05.653039    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:05.653048    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:05.678538    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:05.678546    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:05.690858    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:05.690872    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:05.730481    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:05.730493    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:05.734876    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:05.734884    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:05.747050    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:05.747062    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:05.760508    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:05.760521    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:05.775404    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:05.775414    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:05.793422    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:05.793433    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:05.807519    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:05.807532    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:08.320310    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:05.970240    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:05.970346    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:05.981555    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:05.981650    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:05.992318    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:05.992404    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:06.002964    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:06.003046    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:06.013720    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:06.013798    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:06.023931    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:06.024010    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:06.034704    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:06.034778    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:06.045267    4051 logs.go:282] 0 containers: []
	W1014 07:47:06.045277    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:06.045341    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:06.055808    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:06.055825    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:06.055831    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:06.067241    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:06.067251    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:06.071635    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:06.071644    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:06.108492    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:06.108503    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:06.122391    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:06.122401    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:06.139642    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:06.139654    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:06.153779    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:06.153789    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:06.189339    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:06.189348    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:06.204092    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:06.204102    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:06.220008    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:06.220018    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:06.231516    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:06.231526    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:06.247252    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:06.247263    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:06.261856    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:06.261865    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:06.274127    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:06.274138    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:06.289733    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:06.289744    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:08.817148    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:13.322503    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:13.322664    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:13.336174    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:13.336261    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:13.354029    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:13.354107    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:13.364980    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:13.365066    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:13.375920    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:13.375998    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:13.386340    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:13.386415    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:13.396749    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:13.396835    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:13.407530    4105 logs.go:282] 0 containers: []
	W1014 07:47:13.407541    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:13.407611    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:13.417910    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:13.417928    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:13.417936    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:13.429941    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:13.429953    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:13.441652    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:13.441670    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:13.455791    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:13.455802    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:13.493782    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:13.493794    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:13.507751    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:13.507765    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:13.542490    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:13.542504    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:13.554881    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:13.554893    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:13.567050    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:13.567066    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:13.590991    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:13.590998    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:13.595135    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:13.595141    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:13.614212    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:13.614223    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:13.626080    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:13.626092    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:13.639319    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:13.639331    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:13.654319    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:13.654329    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:13.819372    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:13.819475    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:13.830514    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:13.830597    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:13.841134    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:13.841210    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:13.851849    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:13.851941    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:13.862736    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:13.862814    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:13.873153    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:13.873232    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:13.883973    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:13.884058    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:13.894026    4051 logs.go:282] 0 containers: []
	W1014 07:47:13.894037    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:13.894101    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:13.905153    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:13.905170    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:13.905176    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:13.917558    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:13.917569    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:13.933658    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:13.933669    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:13.948114    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:13.948126    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:13.965786    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:13.965796    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:14.002243    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:14.002257    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:14.006712    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:14.006719    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:14.031552    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:14.031571    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:14.043480    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:14.043493    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:14.055400    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:14.055411    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:14.070815    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:14.070825    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:14.109952    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:14.109965    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:14.126490    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:14.126501    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:14.141575    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:14.141584    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:14.153442    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:14.153452    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:16.180330    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:16.667075    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:21.182602    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:21.182784    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:21.194176    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:21.194254    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:21.204528    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:21.204599    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:21.215499    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:21.215581    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:21.230953    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:21.231024    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:21.241712    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:21.241793    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:21.252216    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:21.252305    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:21.262919    4105 logs.go:282] 0 containers: []
	W1014 07:47:21.262930    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:21.262993    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:21.273489    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:21.273507    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:21.273514    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:21.308355    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:21.308369    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:21.319919    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:21.319930    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:21.338662    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:21.338672    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:21.375504    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:21.375513    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:21.398722    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:21.398733    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:21.410371    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:21.410381    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:21.414767    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:21.414774    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:21.425901    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:21.425913    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:21.443702    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:21.443713    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:21.455726    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:21.455737    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:21.467631    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:21.467641    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:21.493465    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:21.493473    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:21.505771    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:21.505785    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:21.520584    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:21.520594    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:24.036089    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:21.669188    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:21.669295    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:21.680609    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:21.680679    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:21.691145    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:21.691254    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:21.702092    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:21.702171    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:21.712199    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:21.712282    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:21.722772    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:21.722877    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:21.733317    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:21.733392    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:21.744009    4051 logs.go:282] 0 containers: []
	W1014 07:47:21.744021    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:21.744089    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:21.757299    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:21.757315    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:21.757320    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:21.772811    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:21.772823    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:21.790359    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:21.790371    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:21.815870    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:21.815880    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:21.830102    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:21.830112    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:21.842196    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:21.842209    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:21.854353    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:21.854363    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:21.866836    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:21.866849    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:21.881317    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:21.881328    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:21.897177    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:21.897187    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:21.909018    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:21.909033    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:21.920576    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:21.920591    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:21.955946    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:21.955957    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:21.961131    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:21.961139    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:21.977614    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:21.977626    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:24.513458    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:29.038346    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:29.038571    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:29.515624    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:29.515777    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:29.527144    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:29.527224    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:29.538284    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:29.538367    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:29.551663    4051 logs.go:282] 4 containers: [09d3ed4d75e8 fbe909541ee8 ec14ed534d2b a7d107d169c1]
	I1014 07:47:29.551743    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:29.562264    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:29.562341    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:29.572863    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:29.572941    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:29.590261    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:29.590341    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:29.600535    4051 logs.go:282] 0 containers: []
	W1014 07:47:29.600547    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:29.600618    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:29.611510    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:29.611530    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:29.611535    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:29.623408    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:29.623419    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:29.634670    4051 logs.go:123] Gathering logs for coredns [ec14ed534d2b] ...
	I1014 07:47:29.634680    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec14ed534d2b"
	I1014 07:47:29.646621    4051 logs.go:123] Gathering logs for coredns [a7d107d169c1] ...
	I1014 07:47:29.646632    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d107d169c1"
	I1014 07:47:29.658340    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:29.658350    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:29.675543    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:29.675556    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:29.698793    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:29.698800    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:29.732806    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:29.732819    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:29.748761    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:29.748771    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:29.763320    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:29.763330    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:29.775353    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:29.775364    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:29.787467    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:29.787478    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:29.800759    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:29.800769    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:29.836392    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:29.836400    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:29.851418    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:29.851430    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:29.054775    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:29.054870    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:29.067493    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:29.067573    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:29.078835    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:29.078925    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:29.094669    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:29.094752    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:29.105196    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:29.105276    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:29.116309    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:29.116377    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:29.126467    4105 logs.go:282] 0 containers: []
	W1014 07:47:29.126480    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:29.126546    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:29.137076    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:29.137094    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:29.137100    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:29.148715    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:29.148726    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:29.160870    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:29.160881    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:29.186440    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:29.186452    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:29.221730    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:29.221740    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:29.233975    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:29.233986    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:29.245910    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:29.245920    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:29.257644    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:29.257656    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:29.262300    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:29.262308    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:29.276570    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:29.276579    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:29.288425    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:29.288436    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:29.303528    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:29.303537    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:29.341800    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:29.341808    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:29.355641    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:29.355655    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:29.366802    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:29.366812    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:31.894897    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:32.358407    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:36.897127    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:36.897418    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:36.918500    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:36.918612    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:36.934590    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:36.934680    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:36.946420    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:36.946505    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:36.957536    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:36.957622    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:36.972634    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:36.972713    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:36.984061    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:36.984139    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:36.994525    4105 logs.go:282] 0 containers: []
	W1014 07:47:36.994537    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:36.994604    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:37.004944    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:37.004967    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:37.004973    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:37.017212    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:37.017224    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:37.035402    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:37.035413    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:37.073655    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:37.073666    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:37.090269    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:37.090281    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:37.105712    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:37.105722    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:37.123839    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:37.123850    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:37.136278    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:37.136290    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:37.147731    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:37.147745    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:37.169259    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:37.169270    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:37.195344    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:37.195354    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:37.199797    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:37.199805    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:37.235242    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:37.235253    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:37.250053    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:37.250064    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:37.263189    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:37.263199    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:37.360730    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:37.360838    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:37.372334    4051 logs.go:282] 1 containers: [1669a9fff277]
	I1014 07:47:37.372413    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:37.382685    4051 logs.go:282] 1 containers: [da93bbd580c1]
	I1014 07:47:37.382760    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:37.393459    4051 logs.go:282] 4 containers: [c3d009bc2ad8 468a0e63e316 09d3ed4d75e8 fbe909541ee8]
	I1014 07:47:37.393539    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:37.404715    4051 logs.go:282] 1 containers: [23162fe92abb]
	I1014 07:47:37.404795    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:37.415345    4051 logs.go:282] 1 containers: [4560176f7813]
	I1014 07:47:37.415419    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:37.425999    4051 logs.go:282] 1 containers: [4f224408549a]
	I1014 07:47:37.426071    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:37.436276    4051 logs.go:282] 0 containers: []
	W1014 07:47:37.436288    4051 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:37.436356    4051 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:37.446409    4051 logs.go:282] 1 containers: [b1f8eb243a9e]
	I1014 07:47:37.446428    4051 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:37.446434    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:37.482003    4051 logs.go:123] Gathering logs for storage-provisioner [b1f8eb243a9e] ...
	I1014 07:47:37.482014    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f8eb243a9e"
	I1014 07:47:37.493950    4051 logs.go:123] Gathering logs for etcd [da93bbd580c1] ...
	I1014 07:47:37.493961    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da93bbd580c1"
	I1014 07:47:37.507908    4051 logs.go:123] Gathering logs for coredns [09d3ed4d75e8] ...
	I1014 07:47:37.507918    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09d3ed4d75e8"
	I1014 07:47:37.520187    4051 logs.go:123] Gathering logs for coredns [fbe909541ee8] ...
	I1014 07:47:37.520197    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe909541ee8"
	I1014 07:47:37.531751    4051 logs.go:123] Gathering logs for kube-proxy [4560176f7813] ...
	I1014 07:47:37.531765    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4560176f7813"
	I1014 07:47:37.543219    4051 logs.go:123] Gathering logs for kube-controller-manager [4f224408549a] ...
	I1014 07:47:37.543230    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f224408549a"
	I1014 07:47:37.565095    4051 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:37.565105    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:37.589945    4051 logs.go:123] Gathering logs for container status ...
	I1014 07:47:37.589960    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:37.604280    4051 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:37.604294    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:37.608722    4051 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:37.608729    4051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:37.645511    4051 logs.go:123] Gathering logs for kube-apiserver [1669a9fff277] ...
	I1014 07:47:37.645524    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1669a9fff277"
	I1014 07:47:37.660237    4051 logs.go:123] Gathering logs for coredns [c3d009bc2ad8] ...
	I1014 07:47:37.660248    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d009bc2ad8"
	I1014 07:47:37.671660    4051 logs.go:123] Gathering logs for coredns [468a0e63e316] ...
	I1014 07:47:37.671672    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468a0e63e316"
	I1014 07:47:37.683198    4051 logs.go:123] Gathering logs for kube-scheduler [23162fe92abb] ...
	I1014 07:47:37.683210    4051 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23162fe92abb"
	I1014 07:47:40.205371    4051 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:39.776986    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:45.207577    4051 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:45.211970    4051 out.go:201] 
	W1014 07:47:45.216000    4051 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1014 07:47:45.216006    4051 out.go:270] * 
	W1014 07:47:45.216662    4051 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:47:45.226868    4051 out.go:201] 
	I1014 07:47:44.779167    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:44.779360    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:44.796573    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:44.796673    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:44.810487    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:44.810570    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:44.821908    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:44.821994    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:44.833032    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:44.833112    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:44.843719    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:44.843801    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:44.856903    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:44.856978    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:44.867648    4105 logs.go:282] 0 containers: []
	W1014 07:47:44.867659    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:44.867731    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:44.879277    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:44.879296    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:44.879302    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:44.894153    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:44.894164    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:44.906126    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:44.906136    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:44.922084    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:44.922096    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:44.939653    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:44.939663    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:44.951477    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:44.951488    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:44.987498    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:44.987508    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:45.000961    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:45.000972    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:45.013028    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:45.013040    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:45.024554    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:45.024564    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:45.029517    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:45.029524    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:45.043850    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:45.043865    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:45.080660    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:45.080668    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:45.092528    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:45.092541    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:45.105536    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:45.105547    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:47.631954    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:52.634186    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:52.634404    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:52.649468    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:52.649567    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:52.661862    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:52.661940    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:52.673191    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:52.673277    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:52.684780    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:52.684851    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:52.696155    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:52.696241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:52.707496    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:52.707571    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:52.718571    4105 logs.go:282] 0 containers: []
	W1014 07:47:52.718581    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:52.718647    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:52.729172    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:52.729189    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:52.729195    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:52.766083    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:52.766098    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:52.778839    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:52.778853    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:52.791382    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:52.791392    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:52.806195    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:52.806207    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:52.817882    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:52.817895    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:52.832104    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:52.832115    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:52.847246    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:52.847255    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:52.862882    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:52.862895    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:52.900371    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:52.900385    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:52.912621    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:52.912633    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:52.931368    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:52.931379    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:52.956015    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:52.956026    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:52.960548    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:52.960554    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:52.975119    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:52.975134    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:55.490706    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
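
The streaming log above is minikube's start loop stuck in its failure mode: each "Checking apiserver healthz" probe at https://10.0.2.15:8443/healthz gives up after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), the per-component logs are re-gathered, and the probe repeats until the overall 6m0s node deadline expires with "Exiting due to GUEST_START". A minimal Go sketch of that poll-until-deadline pattern, using the URL, per-probe timeout, and overall deadline shown in the log (an illustration of the pattern, not minikube's actual implementation):

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url every interval until it returns HTTP 200 "ok"
    // or ctx expires, mirroring the probe/timeout loop in the log above.
    func waitForHealthz(ctx context.Context, url string, interval time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-probe timeout; the log's probes give up after ~5s
            // The apiserver's serving cert is not trusted by the host during
            // bring-up, so a kubeconfig-less probe skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil // healthy
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
            case <-ticker.C: // retry
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
            fmt.Println("X Exiting due to GUEST_START: wait for healthy API server:", err)
        }
    }

Note that the healthz probes fail from the host side even though the "describe nodes" and "container status" sections below show the apiserver container Running and the node Ready, which points at guest-to-host connectivity rather than a crashed control plane.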
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-10-14 14:38:41 UTC, ends at Mon 2024-10-14 14:48:01 UTC. --
	Oct 14 14:47:38 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 14 14:47:43 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 14 14:47:45 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:45Z" level=error msg="ContainerStats resp: {0x40004dc800 linux}"
	Oct 14 14:47:45 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:45Z" level=error msg="ContainerStats resp: {0x40004dd780 linux}"
	Oct 14 14:47:46 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:46Z" level=error msg="ContainerStats resp: {0x40007b52c0 linux}"
	Oct 14 14:47:47 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:47Z" level=error msg="ContainerStats resp: {0x40004fc700 linux}"
	Oct 14 14:47:47 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:47Z" level=error msg="ContainerStats resp: {0x400009db80 linux}"
	Oct 14 14:47:47 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:47Z" level=error msg="ContainerStats resp: {0x40004fce00 linux}"
	Oct 14 14:47:47 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:47Z" level=error msg="ContainerStats resp: {0x40004fd340 linux}"
	Oct 14 14:47:47 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:47Z" level=error msg="ContainerStats resp: {0x40003a0280 linux}"
	Oct 14 14:47:47 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:47Z" level=error msg="ContainerStats resp: {0x40003a0cc0 linux}"
	Oct 14 14:47:47 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:47Z" level=error msg="ContainerStats resp: {0x40003a11c0 linux}"
	Oct 14 14:47:48 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 14 14:47:53 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 14 14:47:57 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:57Z" level=error msg="ContainerStats resp: {0x400096cd40 linux}"
	Oct 14 14:47:57 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:57Z" level=error msg="ContainerStats resp: {0x40004dd700 linux}"
	Oct 14 14:47:58 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 14 14:47:58 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:58Z" level=error msg="ContainerStats resp: {0x4000a01500 linux}"
	Oct 14 14:47:59 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:59Z" level=error msg="ContainerStats resp: {0x4000837480 linux}"
	Oct 14 14:47:59 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:59Z" level=error msg="ContainerStats resp: {0x4000837880 linux}"
	Oct 14 14:47:59 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:59Z" level=error msg="ContainerStats resp: {0x40001b5f00 linux}"
	Oct 14 14:47:59 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:59Z" level=error msg="ContainerStats resp: {0x40003a02c0 linux}"
	Oct 14 14:47:59 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:59Z" level=error msg="ContainerStats resp: {0x40003a07c0 linux}"
	Oct 14 14:47:59 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:59Z" level=error msg="ContainerStats resp: {0x40004fd540 linux}"
	Oct 14 14:47:59 running-upgrade-116000 cri-dockerd[3073]: time="2024-10-14T14:47:59Z" level=error msg="ContainerStats resp: {0x40003a1380 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c3d009bc2ad86       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   03bfb85413440
	468a0e63e316e       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   c7bd1183c4fcf
	09d3ed4d75e8d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c7bd1183c4fcf
	fbe909541ee84       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   03bfb85413440
	4560176f78133       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   1bd3e93262d95
	b1f8eb243a9e3       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   18101895cb609
	da93bbd580c11       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   0b4afe28eb18f
	23162fe92abbb       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   959148ab8cfbd
	1669a9fff2775       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   e5580eb49c5b9
	4f224408549a9       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   3ca296247e2c1
	
	
	==> coredns [09d3ed4d75e8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:47945->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:37390->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:60523->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:36438->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:37367->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:52210->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:54447->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:53811->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:43359->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3254342669026515864.1947461784315233810. HINFO: read udp 10.244.0.2:41232->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [468a0e63e316] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1982414302977683015.914951995046813148. HINFO: read udp 10.244.0.2:49069->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1982414302977683015.914951995046813148. HINFO: read udp 10.244.0.2:38415->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1982414302977683015.914951995046813148. HINFO: read udp 10.244.0.2:41396->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1982414302977683015.914951995046813148. HINFO: read udp 10.244.0.2:42213->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1982414302977683015.914951995046813148. HINFO: read udp 10.244.0.2:41693->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1982414302977683015.914951995046813148. HINFO: read udp 10.244.0.2:37478->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1982414302977683015.914951995046813148. HINFO: read udp 10.244.0.2:35365->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c3d009bc2ad8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3124432780538968791.2331242904006457841. HINFO: read udp 10.244.0.3:58057->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3124432780538968791.2331242904006457841. HINFO: read udp 10.244.0.3:33924->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3124432780538968791.2331242904006457841. HINFO: read udp 10.244.0.3:52502->10.0.2.3:53: i/o timeout
	
	
	==> coredns [fbe909541ee8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:41465->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:33951->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:41281->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:39455->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:52706->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:37235->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:59341->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:44754->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:37702->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3865989521194679328.1487999347112785971. HINFO: read udp 10.244.0.3:47790->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
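
Every CoreDNS instance in this report fails the same way: its upstream health probes (the random-QNAME HINFO queries in the "[ERROR] plugin/errors" lines) to 10.0.2.3:53 never get an answer. 10.0.2.3 is the built-in DNS server of QEMU's user-mode networking, presumably the upstream in the guest's /etc/resolv.conf that CoreDNS forwards to. A self-contained Go sketch that reproduces the same "i/o timeout" by hand-assembling one HINFO query for the root zone and waiting two seconds for a reply (the probe shape is an assumption modeled on the log; CoreDNS uses a random QNAME rather than the root):

    package main

    import (
        "encoding/hex"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // One hand-assembled DNS query: ID 0x1234, RD flag set, one question
        // for ". HINFO IN" (12-byte header, root QNAME 0x00, QTYPE 0x000d,
        // QCLASS 0x0001). CoreDNS's upstream probes look much like this.
        query, err := hex.DecodeString("12340100000100000000000000000d0001")
        if err != nil {
            panic(err)
        }
        conn, err := net.DialTimeout("udp", "10.0.2.3:53", 2*time.Second)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        conn.SetDeadline(time.Now().Add(2 * time.Second))
        if _, err := conn.Write(query); err != nil {
            fmt.Println("write:", err)
            return
        }
        buf := make([]byte, 512)
        n, err := conn.Read(buf)
        if err != nil {
            // On this guest the read fails exactly like the log's
            // "read udp ...->10.0.2.3:53: i/o timeout" lines.
            fmt.Println("read:", err)
            return
        }
        fmt.Printf("upstream answered with a %d-byte reply\n", n)
    }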
	
	
	==> describe nodes <==
	Name:               running-upgrade-116000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-116000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=running-upgrade-116000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_43_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:43:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-116000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:48:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:43:44 +0000   Mon, 14 Oct 2024 14:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:43:44 +0000   Mon, 14 Oct 2024 14:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:43:44 +0000   Mon, 14 Oct 2024 14:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:43:44 +0000   Mon, 14 Oct 2024 14:43:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-116000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 002177a049044ca79e1916fd5c9ef346
	  System UUID:                002177a049044ca79e1916fd5c9ef346
	  Boot ID:                    c079651d-1af6-499e-af60-17cc39813f94
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-f5knc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-tpl7d                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-116000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-116000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-116000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-fmmcp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-116000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-116000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-116000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-116000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-116000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-116000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-116000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-116000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-116000 event: Registered Node running-upgrade-116000 in Controller
	
	
	==> dmesg <==
	[  +1.835979] systemd-fstab-generator[829]: Ignoring "noauto" for root device
	[  +0.078844] systemd-fstab-generator[840]: Ignoring "noauto" for root device
	[  +0.075532] systemd-fstab-generator[851]: Ignoring "noauto" for root device
	[  +0.172336] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[  +0.073607] systemd-fstab-generator[1012]: Ignoring "noauto" for root device
	[  +2.625056] systemd-fstab-generator[1292]: Ignoring "noauto" for root device
	[  +0.241695] kauditd_printk_skb: 92 callbacks suppressed
	[Oct14 14:39] systemd-fstab-generator[1932]: Ignoring "noauto" for root device
	[  +5.584423] systemd-fstab-generator[2223]: Ignoring "noauto" for root device
	[  +0.170094] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[  +0.099187] systemd-fstab-generator[2268]: Ignoring "noauto" for root device
	[  +0.113057] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[ +12.624227] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.217209] systemd-fstab-generator[3028]: Ignoring "noauto" for root device
	[  +0.088955] systemd-fstab-generator[3041]: Ignoring "noauto" for root device
	[  +0.082170] systemd-fstab-generator[3052]: Ignoring "noauto" for root device
	[  +0.098447] systemd-fstab-generator[3066]: Ignoring "noauto" for root device
	[  +2.496044] systemd-fstab-generator[3218]: Ignoring "noauto" for root device
	[  +2.386956] systemd-fstab-generator[3594]: Ignoring "noauto" for root device
	[  +1.441430] systemd-fstab-generator[4039]: Ignoring "noauto" for root device
	[ +19.251909] kauditd_printk_skb: 68 callbacks suppressed
	[Oct14 14:43] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.566695] systemd-fstab-generator[12233]: Ignoring "noauto" for root device
	[  +5.134978] systemd-fstab-generator[12817]: Ignoring "noauto" for root device
	[  +0.480069] systemd-fstab-generator[12951]: Ignoring "noauto" for root device
	
	
	==> etcd [da93bbd580c1] <==
	{"level":"info","ts":"2024-10-14T14:43:40.418Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-14T14:43:40.422Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T14:43:40.422Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T14:43:40.422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-14T14:43:40.422Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-14T14:43:40.422Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-14T14:43:40.422Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-14T14:43:40.606Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:43:40.610Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:43:40.610Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:43:40.610Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-116000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:43:40.610Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:43:40.611Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-14T14:43:40.611Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:43:40.611Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T14:43:40.611Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:43:40.611Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:43:40.610Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 14:48:01 up 9 min,  0 users,  load average: 0.22, 0.27, 0.16
	Linux running-upgrade-116000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1669a9fff277] <==
	I1014 14:43:42.134298       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1014 14:43:42.139426       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1014 14:43:42.141587       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1014 14:43:42.141639       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1014 14:43:42.142750       1 cache.go:39] Caches are synced for autoregister controller
	I1014 14:43:42.142878       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 14:43:42.160273       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1014 14:43:42.884877       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1014 14:43:43.044176       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:43:43.046965       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:43:43.046989       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:43:43.172156       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:43:43.182008       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:43:43.204558       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1014 14:43:43.206474       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1014 14:43:43.206839       1 controller.go:611] quota admission added evaluator for: endpoints
	I1014 14:43:43.208119       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:43:44.178830       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1014 14:43:44.488297       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1014 14:43:44.491265       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1014 14:43:44.495522       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1014 14:43:44.551006       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:43:57.683812       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1014 14:43:57.933995       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1014 14:43:58.214062       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [4f224408549a] <==
	I1014 14:43:57.029202       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 14:43:57.029223       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 14:43:57.029233       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 14:43:57.029285       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 14:43:57.031676       1 shared_informer.go:262] Caches are synced for endpoint
	I1014 14:43:57.031861       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1014 14:43:57.031987       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1014 14:43:57.032218       1 shared_informer.go:262] Caches are synced for HPA
	I1014 14:43:57.032349       1 shared_informer.go:262] Caches are synced for ephemeral
	I1014 14:43:57.033377       1 shared_informer.go:262] Caches are synced for namespace
	I1014 14:43:57.033402       1 shared_informer.go:262] Caches are synced for TTL
	I1014 14:43:57.047312       1 shared_informer.go:262] Caches are synced for stateful set
	I1014 14:43:57.115800       1 shared_informer.go:262] Caches are synced for deployment
	I1014 14:43:57.132198       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1014 14:43:57.145309       1 shared_informer.go:262] Caches are synced for disruption
	I1014 14:43:57.145321       1 disruption.go:371] Sending events to api server.
	I1014 14:43:57.199528       1 shared_informer.go:262] Caches are synced for resource quota
	I1014 14:43:57.233869       1 shared_informer.go:262] Caches are synced for resource quota
	I1014 14:43:57.653265       1 shared_informer.go:262] Caches are synced for garbage collector
	I1014 14:43:57.687150       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fmmcp"
	I1014 14:43:57.704521       1 shared_informer.go:262] Caches are synced for garbage collector
	I1014 14:43:57.704529       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1014 14:43:57.937916       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1014 14:43:58.035619       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-f5knc"
	I1014 14:43:58.040374       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-tpl7d"
	
	
	==> kube-proxy [4560176f7813] <==
	I1014 14:43:58.201808       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1014 14:43:58.201834       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1014 14:43:58.201844       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1014 14:43:58.212063       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1014 14:43:58.212077       1 server_others.go:206] "Using iptables Proxier"
	I1014 14:43:58.212105       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1014 14:43:58.212268       1 server.go:661] "Version info" version="v1.24.1"
	I1014 14:43:58.212280       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:43:58.212590       1 config.go:317] "Starting service config controller"
	I1014 14:43:58.212601       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1014 14:43:58.212621       1 config.go:226] "Starting endpoint slice config controller"
	I1014 14:43:58.212632       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1014 14:43:58.212930       1 config.go:444] "Starting node config controller"
	I1014 14:43:58.212941       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1014 14:43:58.312723       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1014 14:43:58.312725       1 shared_informer.go:262] Caches are synced for service config
	I1014 14:43:58.312962       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [23162fe92abb] <==
	W1014 14:43:42.096945       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:43:42.096952       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1014 14:43:42.096983       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 14:43:42.096990       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1014 14:43:42.097048       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:43:42.097055       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1014 14:43:42.097090       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 14:43:42.097093       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1014 14:43:42.097191       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:43:42.097198       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1014 14:43:42.925502       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:43:42.925595       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 14:43:42.982055       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:43:42.982079       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1014 14:43:42.990554       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 14:43:42.990607       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1014 14:43:42.996112       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:43:42.996282       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1014 14:43:43.012078       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:43:43.012142       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1014 14:43:43.056922       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:43:43.056995       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1014 14:43:43.082907       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:43:43.082992       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 14:43:45.194361       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-10-14 14:38:41 UTC, ends at Mon 2024-10-14 14:48:01 UTC. --
	Oct 14 14:43:45 running-upgrade-116000 kubelet[12826]: I1014 14:43:45.950324   12826 reconciler.go:157] "Reconciler: start to sync state"
	Oct 14 14:43:46 running-upgrade-116000 kubelet[12826]: E1014 14:43:46.121149   12826 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-116000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-116000"
	Oct 14 14:43:46 running-upgrade-116000 kubelet[12826]: E1014 14:43:46.320630   12826 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-116000\" already exists" pod="kube-system/etcd-running-upgrade-116000"
	Oct 14 14:43:46 running-upgrade-116000 kubelet[12826]: E1014 14:43:46.522541   12826 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-116000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-116000"
	Oct 14 14:43:56 running-upgrade-116000 kubelet[12826]: I1014 14:43:56.988702   12826 topology_manager.go:200] "Topology Admit Handler"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.031394   12826 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.031762   12826 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.035959   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0f80962e-b021-4414-88ee-5f9c390b6543-tmp\") pod \"storage-provisioner\" (UID: \"0f80962e-b021-4414-88ee-5f9c390b6543\") " pod="kube-system/storage-provisioner"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.035998   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gcnf\" (UniqueName: \"kubernetes.io/projected/0f80962e-b021-4414-88ee-5f9c390b6543-kube-api-access-4gcnf\") pod \"storage-provisioner\" (UID: \"0f80962e-b021-4414-88ee-5f9c390b6543\") " pod="kube-system/storage-provisioner"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: E1014 14:43:57.139628   12826 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: E1014 14:43:57.139650   12826 projected.go:192] Error preparing data for projected volume kube-api-access-4gcnf for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: E1014 14:43:57.139683   12826 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/0f80962e-b021-4414-88ee-5f9c390b6543-kube-api-access-4gcnf podName:0f80962e-b021-4414-88ee-5f9c390b6543 nodeName:}" failed. No retries permitted until 2024-10-14 14:43:57.639671202 +0000 UTC m=+13.162070672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4gcnf" (UniqueName: "kubernetes.io/projected/0f80962e-b021-4414-88ee-5f9c390b6543-kube-api-access-4gcnf") pod "storage-provisioner" (UID: "0f80962e-b021-4414-88ee-5f9c390b6543") : configmap "kube-root-ca.crt" not found
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.688173   12826 topology_manager.go:200] "Topology Admit Handler"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.739397   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/333f9aac-2bc2-4f3f-a37e-fbb67e516379-lib-modules\") pod \"kube-proxy-fmmcp\" (UID: \"333f9aac-2bc2-4f3f-a37e-fbb67e516379\") " pod="kube-system/kube-proxy-fmmcp"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.739421   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpqtj\" (UniqueName: \"kubernetes.io/projected/333f9aac-2bc2-4f3f-a37e-fbb67e516379-kube-api-access-xpqtj\") pod \"kube-proxy-fmmcp\" (UID: \"333f9aac-2bc2-4f3f-a37e-fbb67e516379\") " pod="kube-system/kube-proxy-fmmcp"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.739442   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/333f9aac-2bc2-4f3f-a37e-fbb67e516379-kube-proxy\") pod \"kube-proxy-fmmcp\" (UID: \"333f9aac-2bc2-4f3f-a37e-fbb67e516379\") " pod="kube-system/kube-proxy-fmmcp"
	Oct 14 14:43:57 running-upgrade-116000 kubelet[12826]: I1014 14:43:57.739453   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/333f9aac-2bc2-4f3f-a37e-fbb67e516379-xtables-lock\") pod \"kube-proxy-fmmcp\" (UID: \"333f9aac-2bc2-4f3f-a37e-fbb67e516379\") " pod="kube-system/kube-proxy-fmmcp"
	Oct 14 14:43:58 running-upgrade-116000 kubelet[12826]: I1014 14:43:58.048542   12826 topology_manager.go:200] "Topology Admit Handler"
	Oct 14 14:43:58 running-upgrade-116000 kubelet[12826]: I1014 14:43:58.048609   12826 topology_manager.go:200] "Topology Admit Handler"
	Oct 14 14:43:58 running-upgrade-116000 kubelet[12826]: I1014 14:43:58.249413   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac07f54e-9e59-4bd1-bafa-5dfe6e507d40-config-volume\") pod \"coredns-6d4b75cb6d-f5knc\" (UID: \"ac07f54e-9e59-4bd1-bafa-5dfe6e507d40\") " pod="kube-system/coredns-6d4b75cb6d-f5knc"
	Oct 14 14:43:58 running-upgrade-116000 kubelet[12826]: I1014 14:43:58.249456   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5d4949c-fcdf-43e4-ad44-39fdb1be3cff-config-volume\") pod \"coredns-6d4b75cb6d-tpl7d\" (UID: \"f5d4949c-fcdf-43e4-ad44-39fdb1be3cff\") " pod="kube-system/coredns-6d4b75cb6d-tpl7d"
	Oct 14 14:43:58 running-upgrade-116000 kubelet[12826]: I1014 14:43:58.249470   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jkw\" (UniqueName: \"kubernetes.io/projected/f5d4949c-fcdf-43e4-ad44-39fdb1be3cff-kube-api-access-k7jkw\") pod \"coredns-6d4b75cb6d-tpl7d\" (UID: \"f5d4949c-fcdf-43e4-ad44-39fdb1be3cff\") " pod="kube-system/coredns-6d4b75cb6d-tpl7d"
	Oct 14 14:43:58 running-upgrade-116000 kubelet[12826]: I1014 14:43:58.249482   12826 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrk5\" (UniqueName: \"kubernetes.io/projected/ac07f54e-9e59-4bd1-bafa-5dfe6e507d40-kube-api-access-snrk5\") pod \"coredns-6d4b75cb6d-f5knc\" (UID: \"ac07f54e-9e59-4bd1-bafa-5dfe6e507d40\") " pod="kube-system/coredns-6d4b75cb6d-f5knc"
	Oct 14 14:47:36 running-upgrade-116000 kubelet[12826]: I1014 14:47:36.626263   12826 scope.go:110] "RemoveContainer" containerID="a7d107d169c1d2ff555e3eaa96a9e71793326cf18c4f5c6d6c1cdf3546ea9178"
	Oct 14 14:47:36 running-upgrade-116000 kubelet[12826]: I1014 14:47:36.639052   12826 scope.go:110] "RemoveContainer" containerID="ec14ed534d2bf6980e81a91af8afde51b8427ade32f5db18614587d4d114dec0"
	
	
	==> storage-provisioner [b1f8eb243a9e] <==
	I1014 14:43:58.135025       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:43:58.148751       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:43:58.148769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 14:43:58.153843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 14:43:58.155644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0fb5f805-eeae-457e-9303-4cf6737fc812", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-116000_3750b0c0-4cb2-4108-95bd-9a6e2efa8cff became leader
	I1014 14:43:58.156590       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-116000_3750b0c0-4cb2-4108-95bd-9a6e2efa8cff!
	I1014 14:43:58.256705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-116000_3750b0c0-4cb2-4108-95bd-9a6e2efa8cff!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-116000 -n running-upgrade-116000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-116000 -n running-upgrade-116000: exit status 2 (15.665926125s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-116000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-116000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-116000
--- FAIL: TestRunningBinaryUpgrade (604.13s)
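
The status probe above exits 2 with the apiserver reported as "Stopped"; the harness treats that as "may be ok", skips kubectl commands, and deletes the profile. For reference, a minimal Go sketch of that kind of status probe (hypothetical helper, not the actual helpers_test.go code; it assumes the local out/minikube-darwin-arm64 build and the profile name from the log):

	// statusprobe is an illustrative sketch, not part of the minikube test suite.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation the harness logs above; a non-zero exit with
		// state "Stopped" is what produced "status error: exit status 2".
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.APIServer}}", "-p", "running-upgrade-116000")
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil {
			fmt.Printf("status error: %v (state=%q), may be ok\n", err, state)
			return
		}
		fmt.Printf("apiserver state: %q\n", state)
	}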

TestKubernetesUpgrade (18.76s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-491000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-491000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.993735208s)

-- stdout --
	* [kubernetes-upgrade-491000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-491000" primary control-plane node in "kubernetes-upgrade-491000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-491000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:37:54.939501    3924 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:37:54.939671    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:37:54.939674    3924 out.go:358] Setting ErrFile to fd 2...
	I1014 07:37:54.939677    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:37:54.939798    3924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:37:54.940990    3924 out.go:352] Setting JSON to false
	I1014 07:37:54.958782    3924 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4044,"bootTime":1728912630,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:37:54.958879    3924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:37:54.963545    3924 out.go:177] * [kubernetes-upgrade-491000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:37:54.977740    3924 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:37:54.977799    3924 notify.go:220] Checking for updates...
	I1014 07:37:54.983481    3924 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:37:54.987496    3924 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:37:54.990526    3924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:37:54.993473    3924 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:37:54.996498    3924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:37:54.999918    3924 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:37:55.000001    3924 config.go:182] Loaded profile config "offline-docker-533000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:37:55.000054    3924 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:37:55.003471    3924 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:37:55.010486    3924 start.go:297] selected driver: qemu2
	I1014 07:37:55.010494    3924 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:37:55.010500    3924 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:37:55.012968    3924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:37:55.014281    3924 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:37:55.017698    3924 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 07:37:55.017713    3924 cni.go:84] Creating CNI manager for ""
	I1014 07:37:55.017737    3924 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1014 07:37:55.017775    3924 start.go:340] cluster config:
	{Name:kubernetes-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:37:55.022705    3924 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:37:55.031555    3924 out.go:177] * Starting "kubernetes-upgrade-491000" primary control-plane node in "kubernetes-upgrade-491000" cluster
	I1014 07:37:55.035533    3924 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 07:37:55.035552    3924 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1014 07:37:55.035566    3924 cache.go:56] Caching tarball of preloaded images
	I1014 07:37:55.035659    3924 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:37:55.035665    3924 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1014 07:37:55.035733    3924 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/kubernetes-upgrade-491000/config.json ...
	I1014 07:37:55.035744    3924 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/kubernetes-upgrade-491000/config.json: {Name:mk7b9df83df304ffcdd8f65e9ad2b0e6293eae47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:37:55.036194    3924 start.go:360] acquireMachinesLock for kubernetes-upgrade-491000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:37:55.036242    3924 start.go:364] duration metric: took 41.75µs to acquireMachinesLock for "kubernetes-upgrade-491000"
	I1014 07:37:55.036257    3924 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:37:55.036281    3924 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:37:55.039485    3924 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:37:55.056477    3924 start.go:159] libmachine.API.Create for "kubernetes-upgrade-491000" (driver="qemu2")
	I1014 07:37:55.056511    3924 client.go:168] LocalClient.Create starting
	I1014 07:37:55.056576    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:37:55.056615    3924 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:55.056628    3924 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:55.056666    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:37:55.056695    3924 main.go:141] libmachine: Decoding PEM data...
	I1014 07:37:55.056704    3924 main.go:141] libmachine: Parsing certificate...
	I1014 07:37:55.057096    3924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:37:55.214626    3924 main.go:141] libmachine: Creating SSH key...
	I1014 07:37:55.330026    3924 main.go:141] libmachine: Creating Disk image...
	I1014 07:37:55.330032    3924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:37:55.330220    3924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:37:55.340261    3924 main.go:141] libmachine: STDOUT: 
	I1014 07:37:55.340277    3924 main.go:141] libmachine: STDERR: 
	I1014 07:37:55.340342    3924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2 +20000M
	I1014 07:37:55.348934    3924 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:37:55.348954    3924 main.go:141] libmachine: STDERR: 
	I1014 07:37:55.348977    3924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:37:55.348981    3924 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:37:55.348996    3924 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:37:55.349032    3924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:45:27:7a:ec:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:37:55.350871    3924 main.go:141] libmachine: STDOUT: 
	I1014 07:37:55.350885    3924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:37:55.350907    3924 client.go:171] duration metric: took 294.396792ms to LocalClient.Create
	I1014 07:37:57.353073    3924 start.go:128] duration metric: took 2.316808208s to createHost
	I1014 07:37:57.353173    3924 start.go:83] releasing machines lock for "kubernetes-upgrade-491000", held for 2.316973584s
	W1014 07:37:57.353224    3924 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:37:57.368534    3924 out.go:177] * Deleting "kubernetes-upgrade-491000" in qemu2 ...
	W1014 07:37:57.394265    3924 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:37:57.394298    3924 start.go:729] Will try again in 5 seconds ...
	I1014 07:38:02.396460    3924 start.go:360] acquireMachinesLock for kubernetes-upgrade-491000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:38:02.406458    3924 start.go:364] duration metric: took 9.871334ms to acquireMachinesLock for "kubernetes-upgrade-491000"
	I1014 07:38:02.406662    3924 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:38:02.406866    3924 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:38:02.418505    3924 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:38:02.466973    3924 start.go:159] libmachine.API.Create for "kubernetes-upgrade-491000" (driver="qemu2")
	I1014 07:38:02.467019    3924 client.go:168] LocalClient.Create starting
	I1014 07:38:02.467118    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:38:02.467165    3924 main.go:141] libmachine: Decoding PEM data...
	I1014 07:38:02.467183    3924 main.go:141] libmachine: Parsing certificate...
	I1014 07:38:02.467248    3924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:38:02.467278    3924 main.go:141] libmachine: Decoding PEM data...
	I1014 07:38:02.467292    3924 main.go:141] libmachine: Parsing certificate...
	I1014 07:38:02.467804    3924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:38:02.667005    3924 main.go:141] libmachine: Creating SSH key...
	I1014 07:38:02.838355    3924 main.go:141] libmachine: Creating Disk image...
	I1014 07:38:02.838364    3924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:38:02.838578    3924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:38:02.849193    3924 main.go:141] libmachine: STDOUT: 
	I1014 07:38:02.849209    3924 main.go:141] libmachine: STDERR: 
	I1014 07:38:02.849282    3924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2 +20000M
	I1014 07:38:02.857788    3924 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:38:02.857808    3924 main.go:141] libmachine: STDERR: 
	I1014 07:38:02.857824    3924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:38:02.857829    3924 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:38:02.857842    3924 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:38:02.857872    3924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b8:9f:6e:9c:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:38:02.859738    3924 main.go:141] libmachine: STDOUT: 
	I1014 07:38:02.859752    3924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:38:02.859765    3924 client.go:171] duration metric: took 392.74925ms to LocalClient.Create
	I1014 07:38:04.861930    3924 start.go:128] duration metric: took 2.455037041s to createHost
	I1014 07:38:04.862028    3924 start.go:83] releasing machines lock for "kubernetes-upgrade-491000", held for 2.455597916s
	W1014 07:38:04.862488    3924 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-491000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-491000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:38:04.871141    3924 out.go:201] 
	W1014 07:38:04.877152    3924 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:38:04.877201    3924 out.go:270] * 
	* 
	W1014 07:38:04.880043    3924 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:38:04.889091    3924 out.go:201] 

** /stderr **
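
Note: every qemu2 failure in this run has the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched. A small pre-flight probe for that condition (a sketch, not part of minikube; the socket path is taken from the command lines above):

    // Pre-flight probe: is the socket_vmnet daemon accepting connections?
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// "Connection refused" here means the socket file exists but no
    		// daemon is listening; a missing file would report ENOENT instead.
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is up")
    }

A refused unix socket typically means the daemon died or was never started (with a Homebrew install, e.g. sudo brew services start socket_vmnet).
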
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-491000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-491000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-491000: (3.337266208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-491000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-491000 status --format={{.Host}}: exit status 7 (71.328875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-491000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-491000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.206738042s)

-- stdout --
	* [kubernetes-upgrade-491000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-491000" primary control-plane node in "kubernetes-upgrade-491000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-491000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-491000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	I1014 07:38:08.346126    3984 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:38:08.346278    3984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:38:08.346281    3984 out.go:358] Setting ErrFile to fd 2...
	I1014 07:38:08.346283    3984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:38:08.346424    3984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:38:08.347570    3984 out.go:352] Setting JSON to false
	I1014 07:38:08.365549    3984 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4058,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:38:08.365641    3984 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:38:08.369641    3984 out.go:177] * [kubernetes-upgrade-491000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:38:08.376606    3984 notify.go:220] Checking for updates...
	I1014 07:38:08.379488    3984 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:38:08.384718    3984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:38:08.392492    3984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:38:08.399512    3984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:38:08.406478    3984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:38:08.414527    3984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:38:08.417794    3984 config.go:182] Loaded profile config "kubernetes-upgrade-491000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1014 07:38:08.418070    3984 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:38:08.422518    3984 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:38:08.428503    3984 start.go:297] selected driver: qemu2
	I1014 07:38:08.428509    3984 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:38:08.428573    3984 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:38:08.431169    3984 cni.go:84] Creating CNI manager for ""
	I1014 07:38:08.431199    3984 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
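
Note: the cni.go:158 line records a real decision: with the docker runtime on Kubernetes v1.24 or newer, dockershim networking is gone, so minikube recommends its bridge CNI for VM drivers like qemu2. Roughly, as a simplified sketch (not the actual minikube code, which parses versions properly rather than comparing strings):

    // Simplified sketch of the choice logged at cni.go:158 above.
    package main

    import "fmt"

    // chooseCNI mirrors the logged rule: docker runtime + Kubernetes v1.24+
    // means no dockershim networking, so a bridge CNI is recommended.
    func chooseCNI(runtime, kubeVersion string) string {
    	if runtime == "docker" && kubeVersion >= "v1.24" { // lexical compare: OK only for the v1.2x/v1.3x range here
    		return "bridge"
    	}
    	return "" // pre-v1.24 docker: networking came from dockershim
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "v1.31.1")) // bridge
    }
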
	I1014 07:38:08.431229    3984 start.go:340] cluster config:
	{Name:kubernetes-upgrade-491000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-491000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:38:08.435591    3984 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:38:08.443566    3984 out.go:177] * Starting "kubernetes-upgrade-491000" primary control-plane node in "kubernetes-upgrade-491000" cluster
	I1014 07:38:08.447485    3984 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:38:08.447500    3984 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:38:08.447509    3984 cache.go:56] Caching tarball of preloaded images
	I1014 07:38:08.447579    3984 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:38:08.447585    3984 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:38:08.447634    3984 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/kubernetes-upgrade-491000/config.json ...
	I1014 07:38:08.448114    3984 start.go:360] acquireMachinesLock for kubernetes-upgrade-491000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:38:08.448164    3984 start.go:364] duration metric: took 43µs to acquireMachinesLock for "kubernetes-upgrade-491000"
	I1014 07:38:08.448174    3984 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:38:08.448179    3984 fix.go:54] fixHost starting: 
	I1014 07:38:08.448320    3984 fix.go:112] recreateIfNeeded on kubernetes-upgrade-491000: state=Stopped err=<nil>
	W1014 07:38:08.448330    3984 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:38:08.455500    3984 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-491000" ...
	I1014 07:38:08.459534    3984 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:38:08.459580    3984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b8:9f:6e:9c:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:38:08.461668    3984 main.go:141] libmachine: STDOUT: 
	I1014 07:38:08.461688    3984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:38:08.461713    3984 fix.go:56] duration metric: took 13.534708ms for fixHost
	I1014 07:38:08.461719    3984 start.go:83] releasing machines lock for "kubernetes-upgrade-491000", held for 13.551208ms
	W1014 07:38:08.461725    3984 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:38:08.461761    3984 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:38:08.461766    3984 start.go:729] Will try again in 5 seconds ...
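
Note: start.go:714-729 above shows the outer retry: the first fixHost attempt fails, the machines lock is released, and the whole start is retried once after a fixed five-second pause. The shape of that logic, reduced to a sketch (lock handling omitted; the error string is copied from the log):

    // Sketch of the retry at start.go:714-729 (plumbing omitted).
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func fixHost() error {
    	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
    }

    func main() {
    	if err := fixHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
    		if err := fixHost(); err != nil {
    			// Second failure is terminal: GUEST_PROVISION, exit status 80.
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }
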
	I1014 07:38:13.463954    3984 start.go:360] acquireMachinesLock for kubernetes-upgrade-491000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:38:13.464397    3984 start.go:364] duration metric: took 333.167µs to acquireMachinesLock for "kubernetes-upgrade-491000"
	I1014 07:38:13.464533    3984 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:38:13.464554    3984 fix.go:54] fixHost starting: 
	I1014 07:38:13.465278    3984 fix.go:112] recreateIfNeeded on kubernetes-upgrade-491000: state=Stopped err=<nil>
	W1014 07:38:13.465306    3984 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:38:13.472916    3984 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-491000" ...
	I1014 07:38:13.476854    3984 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:38:13.477041    3984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b8:9f:6e:9c:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubernetes-upgrade-491000/disk.qcow2
	I1014 07:38:13.487389    3984 main.go:141] libmachine: STDOUT: 
	I1014 07:38:13.487454    3984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:38:13.487537    3984 fix.go:56] duration metric: took 22.98575ms for fixHost
	I1014 07:38:13.487561    3984 start.go:83] releasing machines lock for "kubernetes-upgrade-491000", held for 23.13975ms
	W1014 07:38:13.487792    3984 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-491000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-491000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:38:13.494769    3984 out.go:201] 
	W1014 07:38:13.498886    3984 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:38:13.498926    3984 out.go:270] * 
	* 
	W1014 07:38:13.501102    3984 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:38:13.509815    3984 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-491000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-491000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-491000 version --output=json: exit status 1 (33.259375ms)

** stderr ** 
	error: context "kubernetes-upgrade-491000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-14 07:38:13.55317 -0700 PDT m=+3615.890580376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-491000 -n kubernetes-upgrade-491000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-491000 -n kubernetes-upgrade-491000: exit status 7 (37.095875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-491000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-491000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-491000
--- FAIL: TestKubernetesUpgrade (18.76s)

TestStoppedBinaryUpgrade/Upgrade (618.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.708288134 start -p stopped-upgrade-496000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.708288134 start -p stopped-upgrade-496000 --memory=2200 --vm-driver=qemu2 : (1m25.859846292s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.708288134 -p stopped-upgrade-496000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.708288134 -p stopped-upgrade-496000 stop: (12.110550833s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-496000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1014 07:41:57.124533    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:43:29.590581    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:43:46.488812    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
E1014 07:46:57.152169    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-496000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.158950375s)

-- stdout --
	* [stopped-upgrade-496000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-496000" primary control-plane node in "stopped-upgrade-496000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-496000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
-- /stdout --
** stderr ** 
	I1014 07:39:44.024411    4105 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:39:44.024795    4105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:39:44.024799    4105 out.go:358] Setting ErrFile to fd 2...
	I1014 07:39:44.024802    4105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:39:44.024933    4105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:39:44.026163    4105 out.go:352] Setting JSON to false
	I1014 07:39:44.046735    4105 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4154,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:39:44.046835    4105 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:39:44.051325    4105 out.go:177] * [stopped-upgrade-496000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:39:44.059203    4105 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:39:44.059289    4105 notify.go:220] Checking for updates...
	I1014 07:39:44.067139    4105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:39:44.070176    4105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:39:44.073179    4105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:39:44.076215    4105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:39:44.079235    4105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:39:44.082487    4105 config.go:182] Loaded profile config "stopped-upgrade-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:39:44.086123    4105 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 07:39:44.089144    4105 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:39:44.092122    4105 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:39:44.099204    4105 start.go:297] selected driver: qemu2
	I1014 07:39:44.099210    4105 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:39:44.099274    4105 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:39:44.102108    4105 cni.go:84] Creating CNI manager for ""
	I1014 07:39:44.102146    4105 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:39:44.102179    4105 start.go:340] cluster config:
	{Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:39:44.102238    4105 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:39:44.109249    4105 out.go:177] * Starting "stopped-upgrade-496000" primary control-plane node in "stopped-upgrade-496000" cluster
	I1014 07:39:44.113184    4105 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1014 07:39:44.113199    4105 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1014 07:39:44.113204    4105 cache.go:56] Caching tarball of preloaded images
	I1014 07:39:44.113279    4105 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:39:44.113285    4105 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
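
Note: preload.go:146-172 short-circuits the image download: if the preload tarball for the target Kubernetes version already sits in the cache directory, it is reused. A sketch of that check (path layout copied from the log; preloadPath is a hypothetical helper):

    // Sketch of the preload cache check at preload.go:146-172.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func preloadPath(minikubeHome, k8sVersion string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-arm64.tar.lz4", k8sVersion)
    	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
    	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.1")
    	if _, err := os.Stat(p); err == nil {
    		fmt.Println("Found local preload, skipping download:", p)
    	} else {
    		fmt.Println("No local preload, would download:", p)
    	}
    }
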
	I1014 07:39:44.113338    4105 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/config.json ...
	I1014 07:39:44.113796    4105 start.go:360] acquireMachinesLock for stopped-upgrade-496000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:39:44.113845    4105 start.go:364] duration metric: took 43.209µs to acquireMachinesLock for "stopped-upgrade-496000"
	I1014 07:39:44.113856    4105 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:39:44.113861    4105 fix.go:54] fixHost starting: 
	I1014 07:39:44.113988    4105 fix.go:112] recreateIfNeeded on stopped-upgrade-496000: state=Stopped err=<nil>
	W1014 07:39:44.113996    4105 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:39:44.122163    4105 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-496000" ...
	I1014 07:39:44.126186    4105 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:39:44.126291    4105 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/qemu.pid -nic user,model=virtio,hostfwd=tcp::61428-:22,hostfwd=tcp::61429-:2376,hostname=stopped-upgrade-496000 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/disk.qcow2
	I1014 07:39:44.173881    4105 main.go:141] libmachine: STDOUT: 
	I1014 07:39:44.173914    4105 main.go:141] libmachine: STDERR: 
	I1014 07:39:44.173919    4105 main.go:141] libmachine: Waiting for VM to start (ssh -p 61428 docker@127.0.0.1)...
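
Note: the ~19-second gap between 07:39:44 and 07:40:03 is the "Waiting for VM to start" poll: the guest's SSH port is forwarded to localhost:61428 (the hostfwd option in the QEMU command line above), and provisioning proceeds once it accepts connections. A minimal poll loop, assuming an arbitrary three-minute cap rather than minikube's actual timeout:

    // Sketch: dial the forwarded SSH port until the guest answers.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(3 * time.Minute)
    	for time.Now().Before(deadline) {
    		if conn, err := net.DialTimeout("tcp", "127.0.0.1:61428", time.Second); err == nil {
    			conn.Close()
    			fmt.Println("VM is up, SSH reachable")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("timed out waiting for SSH")
    }
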
	I1014 07:40:03.367095    4105 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/config.json ...
	I1014 07:40:03.367607    4105 machine.go:93] provisionDockerMachine start ...
	I1014 07:40:03.367724    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.367991    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.368000    4105 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:40:03.443237    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:40:03.443257    4105 buildroot.go:166] provisioning hostname "stopped-upgrade-496000"
	I1014 07:40:03.443364    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.443558    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.443569    4105 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-496000 && echo "stopped-upgrade-496000" | sudo tee /etc/hostname
	I1014 07:40:03.516439    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-496000
	
	I1014 07:40:03.516529    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.516674    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.516684    4105 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-496000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-496000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-496000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:40:03.584409    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:40:03.584421    4105 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19790-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19790-979/.minikube}
	I1014 07:40:03.584431    4105 buildroot.go:174] setting up certificates
	I1014 07:40:03.584435    4105 provision.go:84] configureAuth start
	I1014 07:40:03.584438    4105 provision.go:143] copyHostCerts
	I1014 07:40:03.584521    4105 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem, removing ...
	I1014 07:40:03.584528    4105 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem
	I1014 07:40:03.584636    4105 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/ca.pem (1078 bytes)
	I1014 07:40:03.584839    4105 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem, removing ...
	I1014 07:40:03.584844    4105 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem
	I1014 07:40:03.584905    4105 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/cert.pem (1123 bytes)
	I1014 07:40:03.585026    4105 exec_runner.go:144] found /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem, removing ...
	I1014 07:40:03.585030    4105 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem
	I1014 07:40:03.585083    4105 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19790-979/.minikube/key.pem (1675 bytes)
	I1014 07:40:03.585171    4105 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-496000 san=[127.0.0.1 localhost minikube stopped-upgrade-496000]
	I1014 07:40:03.878183    4105 provision.go:177] copyRemoteCerts
	I1014 07:40:03.878258    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:40:03.878269    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:40:03.910736    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 07:40:03.917778    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 07:40:03.925077    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:40:03.932419    4105 provision.go:87] duration metric: took 347.982333ms to configureAuth
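
Note: configureAuth above has two halves: copyHostCerts refreshes the CA/client certs inside the minikube root, deleting any stale copy first (the "found ..., removing ..." pairs), and copyRemoteCerts then pushes them into the guest over scp. The local half, as a sketch (paths shortened; copyHostCert is a hypothetical helper):

    // Sketch of the copyHostCerts pattern at exec_runner.go:144-151.
    package main

    import (
    	"fmt"
    	"os"
    )

    func copyHostCert(src, dst string) error {
    	data, err := os.ReadFile(src)
    	if err != nil {
    		return err
    	}
    	// Stale copies are deleted before the fresh cert is written.
    	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	return os.WriteFile(dst, data, 0644)
    }

    func main() {
    	if err := copyHostCert("certs/ca.pem", "ca.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
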
	I1014 07:40:03.932429    4105 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:40:03.932526    4105 config.go:182] Loaded profile config "stopped-upgrade-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:40:03.932572    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.932655    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.932660    4105 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:40:03.994501    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:40:03.994510    4105 buildroot.go:70] root file system type: tmpfs
	I1014 07:40:03.994560    4105 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:40:03.994617    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:03.994736    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:03.994770    4105 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:40:04.058226    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:40:04.058284    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:04.058379    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:04.058388    4105 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:40:04.433815    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:40:04.433828    4105 machine.go:96] duration metric: took 1.066236875s to provisionDockerMachine
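
Note: the docker unit provisioning above uses a write-then-swap idiom: the new unit is written to docker.service.new, diff decides whether anything changed, and only on a difference is the file moved into place and docker reloaded, enabled, and restarted. A local sketch of the same idiom (the real flow runs these commands over SSH inside the guest):

    // Local sketch of the "write .new, swap only if changed" idiom above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	unit := "/lib/systemd/system/docker.service"
    	// diff exits non-zero when the files differ or the old unit is
    	// missing -- exactly the cases where the new unit must be installed.
    	if exec.Command("diff", "-u", unit, unit+".new").Run() == nil {
    		fmt.Println("docker.service unchanged")
    		return
    	}
    	for _, args := range [][]string{
    		{"mv", unit + ".new", unit},
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed: %v: %s\n", args, err, out)
    			return
    		}
    	}
    }
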
	I1014 07:40:04.433835    4105 start.go:293] postStartSetup for "stopped-upgrade-496000" (driver="qemu2")
	I1014 07:40:04.433842    4105 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:40:04.433920    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:40:04.433929    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:40:04.468119    4105 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:40:04.469443    4105 info.go:137] Remote host: Buildroot 2021.02.12
	I1014 07:40:04.469450    4105 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19790-979/.minikube/addons for local assets ...
	I1014 07:40:04.469535    4105 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19790-979/.minikube/files for local assets ...
	I1014 07:40:04.469678    4105 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem -> 14972.pem in /etc/ssl/certs
	I1014 07:40:04.469843    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:40:04.472823    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem --> /etc/ssl/certs/14972.pem (1708 bytes)
	I1014 07:40:04.480246    4105 start.go:296] duration metric: took 46.406083ms for postStartSetup
	I1014 07:40:04.480259    4105 fix.go:56] duration metric: took 20.366856584s for fixHost
	I1014 07:40:04.480301    4105 main.go:141] libmachine: Using SSH client type: native
	I1014 07:40:04.480407    4105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a8e480] 0x104a90cc0 <nil>  [] 0s} localhost 61428 <nil> <nil>}
	I1014 07:40:04.480411    4105 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:40:04.537993    4105 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728916804.544200337
	
	I1014 07:40:04.538002    4105 fix.go:216] guest clock: 1728916804.544200337
	I1014 07:40:04.538006    4105 fix.go:229] Guest: 2024-10-14 07:40:04.544200337 -0700 PDT Remote: 2024-10-14 07:40:04.480261 -0700 PDT m=+20.488572085 (delta=63.939337ms)
	I1014 07:40:04.538021    4105 fix.go:200] guest clock delta is within tolerance: 63.939337ms
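
Note: fix.go:216-229 compares the guest clock (date +%s.%N run over SSH) against the host clock and only resyncs when the delta exceeds a tolerance; the ~64 ms measured here passes. A sketch of the comparison using the exact samples from the log (the 2-second tolerance is an assumption, not minikube's documented value):

    // Sketch of the guest-clock check at fix.go:216-229.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	guestOut := "1728916804.544200337" // `date +%s.%N` output from the guest
    	parts := strings.SplitN(guestOut, ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)
    	host := time.Unix(1728916804, 480261000) // host-side sample from the log
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta < 2*time.Second {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // ~63.94ms here
    	} else {
    		fmt.Println("clock skew", delta, "- would resync the guest clock")
    	}
    }
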
	I1014 07:40:04.538024    4105 start.go:83] releasing machines lock for "stopped-upgrade-496000", held for 20.424632834s
	I1014 07:40:04.538103    4105 ssh_runner.go:195] Run: cat /version.json
	I1014 07:40:04.538114    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:40:04.538103    4105 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 07:40:04.538150    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	W1014 07:40:04.538763    4105 sshutil.go:64] dial failure (will retry): dial tcp [::1]:61428: connect: connection refused
	I1014 07:40:04.538777    4105 retry.go:31] will retry after 258.099859ms: dial tcp [::1]:61428: connect: connection refused
	W1014 07:40:04.569895    4105 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1014 07:40:04.569944    4105 ssh_runner.go:195] Run: systemctl --version
	I1014 07:40:04.571708    4105 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:40:04.573345    4105 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:40:04.573379    4105 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1014 07:40:04.576547    4105 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1014 07:40:04.581400    4105 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:40:04.581408    4105 start.go:495] detecting cgroup driver to use...
	I1014 07:40:04.581490    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:40:04.588737    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1014 07:40:04.592344    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:40:04.595728    4105 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:40:04.595761    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:40:04.599623    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:40:04.602702    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:40:04.605767    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:40:04.609019    4105 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:40:04.612628    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:40:04.616434    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:40:04.620282    4105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:40:04.623751    4105 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:40:04.627259    4105 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:40:04.630101    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:04.712974    4105 ssh_runner.go:195] Run: sudo systemctl restart containerd
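
Note: the run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, and the standard CNI conf dir, before the daemon-reload and restart. The key edit, as a local Go equivalent of the first sed (the real flow runs sed over SSH):

    // Local equivalent of:
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		fmt.Println(err)
    	}
    }
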
	I1014 07:40:04.720000    4105 start.go:495] detecting cgroup driver to use...
	I1014 07:40:04.720089    4105 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:40:04.728599    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:40:04.736763    4105 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:40:04.743531    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:40:04.748127    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:40:04.752829    4105 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:40:04.792608    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:40:04.797883    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:40:04.804199    4105 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:40:04.806244    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:40:04.809419    4105 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1014 07:40:04.815202    4105 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:40:04.891530    4105 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:40:04.973687    4105 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:40:04.973757    4105 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:40:04.978989    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:05.053383    4105 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:40:06.198264    4105 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.144890125s)
	I1014 07:40:06.198355    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:40:06.207468    4105 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1014 07:40:06.213345    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:40:06.218176    4105 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:40:06.294831    4105 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:40:06.370854    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:06.445873    4105 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:40:06.452227    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:40:06.457184    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:06.534646    4105 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
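
The 130-byte /etc/docker/daemon.json written at 07:40:04.973757 is what actually switches Docker to the cgroupfs driver. Its contents are not logged; based on how the driver is configured here, a plausible reconstruction looks like the following (every key other than exec-opts is an assumption):

    # hypothetical reconstruction of the daemon.json minikube wrote
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
    # confirm the driver Docker now reports (the same probe minikube runs later)
    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs
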
	I1014 07:40:06.572105    4105 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:40:06.572199    4105 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:40:06.574316    4105 start.go:563] Will wait 60s for crictl version
	I1014 07:40:06.574381    4105 ssh_runner.go:195] Run: which crictl
	I1014 07:40:06.575761    4105 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:40:06.591820    4105 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1014 07:40:06.591906    4105 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:40:06.608872    4105 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:40:06.629639    4105 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1014 07:40:06.629725    4105 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1014 07:40:06.631007    4105 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
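
The /etc/hosts update uses a filter-then-append idiom so it stays idempotent: any stale host.minikube.internal line is stripped before the fresh one is appended, and the result is copied back with sudo because the shell redirect itself runs unprivileged. The same pattern, generalized into a function (names and values here are illustrative):

    update_hosts_entry() {  # usage: update_hosts_entry <ip> <name>
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 10.0.2.2 host.minikube.internal
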
	I1014 07:40:06.634559    4105 kubeadm.go:883] updating cluster {Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1014 07:40:06.634614    4105 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1014 07:40:06.634665    4105 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:40:06.644816    4105 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:40:06.644832    4105 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1014 07:40:06.644901    4105 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:40:06.648417    4105 ssh_runner.go:195] Run: which lz4
	I1014 07:40:06.649586    4105 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:40:06.650894    4105 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:40:06.650904    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1014 07:40:07.637405    4105 docker.go:653] duration metric: took 987.872958ms to copy over tarball
	I1014 07:40:07.637483    4105 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:40:08.801841    4105 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164368166s)
	I1014 07:40:08.801860    4105 ssh_runner.go:146] rm: /preloaded.tar.lz4
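
The preload path avoids pulling each image over the network: a single lz4-compressed tarball of the Docker image store is copied into the guest and unpacked over /var. The same check-then-extract sequence, as a sketch (the transfer command and guest address are illustrative, since minikube uses its own scp-over-ssh runner):

    TARBALL=/preloaded.tar.lz4
    # only transfer if the guest does not already have it
    if ! stat -c "%s %y" "$TARBALL" >/dev/null 2>&1; then
      scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 \
          root@guest:"$TARBALL"   # guest address is a placeholder
    fi
    # unpack over /var, preserving file capabilities, then clean up
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
    sudo rm -f "$TARBALL"
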
	I1014 07:40:08.818038    4105 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:40:08.821180    4105 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1014 07:40:08.826548    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:08.896052    4105 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:40:10.394121    4105 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.49808275s)
	I1014 07:40:10.394245    4105 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:40:10.408937    4105 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:40:10.408948    4105 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1014 07:40:10.408954    4105 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 07:40:10.415106    4105 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:10.417246    4105 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:10.419511    4105 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:10.419544    4105 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:10.421458    4105 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:10.421485    4105 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:10.422705    4105 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:10.422729    4105 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:10.424708    4105 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:10.424743    4105 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:10.425919    4105 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1014 07:40:10.426093    4105 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:10.427464    4105 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:10.427472    4105 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:10.428333    4105 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1014 07:40:10.429582    4105 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:10.989878    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:10.996822    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:11.001436    4105 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1014 07:40:11.001469    4105 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:11.001528    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1014 07:40:11.009654    4105 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1014 07:40:11.009677    4105 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:11.009724    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1014 07:40:11.011508    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:11.019242    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1014 07:40:11.025637    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1014 07:40:11.031114    4105 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1014 07:40:11.031140    4105 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:11.031198    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1014 07:40:11.041603    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1014 07:40:11.077320    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:11.088288    4105 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1014 07:40:11.088308    4105 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:11.088370    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1014 07:40:11.098905    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1014 07:40:11.099051    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1014 07:40:11.100599    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1014 07:40:11.100611    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1014 07:40:11.113236    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:11.143710    4105 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1014 07:40:11.143741    4105 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:11.143808    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1014 07:40:11.174487    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1014 07:40:11.220125    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W1014 07:40:11.241317    4105 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1014 07:40:11.241490    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:11.267743    4105 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1014 07:40:11.267767    4105 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1014 07:40:11.267846    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1014 07:40:11.277941    4105 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1014 07:40:11.277975    4105 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:11.278046    4105 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1014 07:40:11.299188    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1014 07:40:11.299341    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1014 07:40:11.329423    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1014 07:40:11.329435    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1014 07:40:11.329458    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1014 07:40:11.329587    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1014 07:40:11.344410    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1014 07:40:11.344436    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1014 07:40:11.368870    4105 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1014 07:40:11.368885    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1014 07:40:11.381470    4105 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1014 07:40:11.381642    4105 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:11.416669    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1014 07:40:11.416689    4105 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1014 07:40:11.416695    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1014 07:40:11.421423    4105 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1014 07:40:11.421444    4105 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:11.421510    4105 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:40:11.574771    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1014 07:40:11.574808    4105 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 07:40:11.574811    4105 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1014 07:40:11.574843    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1014 07:40:11.574954    4105 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 07:40:11.619355    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1014 07:40:11.619435    4105 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1014 07:40:11.619470    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1014 07:40:11.650366    4105 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 07:40:11.650380    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1014 07:40:11.887118    4105 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 07:40:11.887158    4105 cache_images.go:92] duration metric: took 1.478229708s to LoadCachedImages
	W1014 07:40:11.887198    4105 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
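
Each missing image is loaded from the local cache with sudo cat piped into docker load rather than docker load -i: the tarballs under /var/lib/minikube/images are root-owned, and the pipe lets the unprivileged docker client read them. The preload tarball also ships k8s.gcr.io names while this code wants registry.k8s.io ones, which is why every image reports "needs transfer" despite the preload, and the two arch-mismatch warnings show the coredns and storage-provisioner cache entries being rebuilt for arm64 first. The per-image cycle, as a sketch:

    IMG=registry.k8s.io/pause:3.7
    TAR=/var/lib/minikube/images/pause_3.7
    # does the runtime already have this exact image?
    if ! docker image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
      docker rmi "$IMG" 2>/dev/null || true   # drop any wrong-name/wrong-arch leftover
      sudo cat "$TAR" | docker load           # root-owned tar, unprivileged client
    fi
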
	I1014 07:40:11.887203    4105 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1014 07:40:11.887264    4105 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-496000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:40:11.887338    4105 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:40:11.901844    4105 cni.go:84] Creating CNI manager for ""
	I1014 07:40:11.901855    4105 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:40:11.901862    4105 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:40:11.901875    4105 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-496000 NodeName:stopped-upgrade-496000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:40:11.901957    4105 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-496000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 07:40:11.902026    4105 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1014 07:40:11.904865    4105 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:40:11.904895    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 07:40:11.907798    4105 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1014 07:40:11.912815    4105 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:40:11.918067    4105 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1014 07:40:11.923614    4105 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1014 07:40:11.924885    4105 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:40:11.928504    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:40:12.008710    4105 ssh_runner.go:195] Run: sudo systemctl start kubelet
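
kubelet's effective configuration is split across the base unit (/lib/systemd/system/kubelet.service) and the 10-kubeadm.conf drop-in written just above, so both need to be read together. systemctl cat (the same subcommand used earlier to probe docker.service) merges them, a quick sketch:

    # view the merged unit: base file plus drop-ins, with source paths annotated
    sudo systemctl cat kubelet.service
    # confirm it actually came up after 'systemctl start kubelet'
    sudo systemctl is-active --quiet kubelet && echo "kubelet running"
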
	I1014 07:40:12.014444    4105 certs.go:68] Setting up /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000 for IP: 10.0.2.15
	I1014 07:40:12.014452    4105 certs.go:194] generating shared ca certs ...
	I1014 07:40:12.014461    4105 certs.go:226] acquiring lock for ca certs: {Name:mk8f9f58f46caac35c7cea538c3ba1c75987d64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.014661    4105 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19790-979/.minikube/ca.key
	I1014 07:40:12.022831    4105 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.key
	I1014 07:40:12.022846    4105 certs.go:256] generating profile certs ...
	I1014 07:40:12.025923    4105 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.key
	I1014 07:40:12.025942    4105 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273
	I1014 07:40:12.025957    4105 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1014 07:40:12.154397    4105 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273 ...
	I1014 07:40:12.154411    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273: {Name:mkc366bf23829c486d581f5bceceede0ef407704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.155028    4105 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273 ...
	I1014 07:40:12.155034    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273: {Name:mkcbebf3d6840e9e2ea115c6f567cb363f7a5ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.156534    4105 certs.go:381] copying /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt.12644273 -> /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt
	I1014 07:40:12.156688    4105 certs.go:385] copying /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key.12644273 -> /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key
	I1014 07:40:12.160104    4105 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/proxy-client.key
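
The freshly minted apiserver certificate has to cover every address clients will dial, hence the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15] above (the service-network ClusterIP, loopback, and the node addresses). One way to eyeball the result, as a sketch (the cert path is abbreviated from the profile directory used in this run):

    CRT=~/.minikube/profiles/stopped-upgrade-496000/apiserver.crt
    openssl x509 -in "$CRT" -noout -text | grep -A1 'Subject Alternative Name'
    # expected to list IP:10.96.0.1, IP:127.0.0.1, IP:10.0.0.1, IP:10.0.2.15
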
	I1014 07:40:12.160270    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497.pem (1338 bytes)
	W1014 07:40:12.160463    4105 certs.go:480] ignoring /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497_empty.pem, impossibly tiny 0 bytes
	I1014 07:40:12.160471    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 07:40:12.160518    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem (1078 bytes)
	I1014 07:40:12.160552    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem (1123 bytes)
	I1014 07:40:12.160583    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/certs/key.pem (1675 bytes)
	I1014 07:40:12.160662    4105 certs.go:484] found cert: /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem (1708 bytes)
	I1014 07:40:12.161037    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:40:12.168628    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 07:40:12.176132    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:40:12.183216    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:40:12.190049    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 07:40:12.196839    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:40:12.204291    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:40:12.211700    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 07:40:12.218042    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:40:12.224547    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/certs/1497.pem --> /usr/share/ca-certificates/1497.pem (1338 bytes)
	I1014 07:40:12.231875    4105 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/ssl/certs/14972.pem --> /usr/share/ca-certificates/14972.pem (1708 bytes)
	I1014 07:40:12.238977    4105 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:40:12.243938    4105 ssh_runner.go:195] Run: openssl version
	I1014 07:40:12.245901    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:40:12.249171    4105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:40:12.250583    4105 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:40:12.250616    4105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:40:12.252279    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:40:12.255288    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1497.pem && ln -fs /usr/share/ca-certificates/1497.pem /etc/ssl/certs/1497.pem"
	I1014 07:40:12.258101    4105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1497.pem
	I1014 07:40:12.259439    4105 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:46 /usr/share/ca-certificates/1497.pem
	I1014 07:40:12.259468    4105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1497.pem
	I1014 07:40:12.261278    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1497.pem /etc/ssl/certs/51391683.0"
	I1014 07:40:12.264581    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14972.pem && ln -fs /usr/share/ca-certificates/14972.pem /etc/ssl/certs/14972.pem"
	I1014 07:40:12.268045    4105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14972.pem
	I1014 07:40:12.269460    4105 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:46 /usr/share/ca-certificates/14972.pem
	I1014 07:40:12.269485    4105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14972.pem
	I1014 07:40:12.271237    4105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14972.pem /etc/ssl/certs/3ec20f2e.0"
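
The b5213941.0, 51391683.0, and 3ec20f2e.0 names are OpenSSL subject-hash links: tools that walk /etc/ssl/certs look certificates up by the hash of their subject, so each installed PEM gets a <hash>.0 symlink. The generic form of what the three blocks above did, as a sketch:

    install_ca() {  # usage: install_ca /usr/share/ca-certificates/foo.pem
      local pem="$1" hash
      hash=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    }
    install_ca /usr/share/ca-certificates/minikubeCA.pem
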
	I1014 07:40:12.274069    4105 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:40:12.275482    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 07:40:12.277613    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 07:40:12.279660    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 07:40:12.281579    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 07:40:12.283391    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 07:40:12.285154    4105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
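
Each -checkend 86400 call asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, non-zero means it expires inside the window and would need regenerating. A loop over the same files, as a sketch:

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
               etcd/healthcheck-client etcd/peer front-proxy-client; do
      if openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" >/dev/null; then
        echo "${crt}: valid for at least 24h"
      else
        echo "${crt}: expires within 24h, would be regenerated"
      fi
    done
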
	I1014 07:40:12.287153    4105 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61521 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1014 07:40:12.287224    4105 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:40:12.299953    4105 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:40:12.303272    4105 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 07:40:12.303282    4105 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 07:40:12.303311    4105 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 07:40:12.306082    4105 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:40:12.306566    4105 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-496000" does not appear in /Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:40:12.306670    4105 kubeconfig.go:62] /Users/jenkins/minikube-integration/19790-979/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-496000" cluster setting kubeconfig missing "stopped-upgrade-496000" context setting]
	I1014 07:40:12.306875    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/kubeconfig: {Name:mkbe79fce3a1d9ddd6036a978e097f20767985b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:40:12.307326    4105 kapi.go:59] client config for stopped-upgrade-496000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.key", CAFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1064e6e40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:40:12.307772    4105 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 07:40:12.310392    4105 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-496000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1014 07:40:12.310396    4105 kubeadm.go:1160] stopping kube-system containers ...
	I1014 07:40:12.310445    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:40:12.320962    4105 docker.go:483] Stopping containers: [01fe0352d451 88a3564ca66c ef8f73ba51dc 75b8f83bcedd d8ecc7085555 49cd8b0e5006 5c35a795ce9a 3a8b6183f21a]
	I1014 07:40:12.321055    4105 ssh_runner.go:195] Run: docker stop 01fe0352d451 88a3564ca66c ef8f73ba51dc 75b8f83bcedd d8ecc7085555 49cd8b0e5006 5c35a795ce9a 3a8b6183f21a
	I1014 07:40:12.332222    4105 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 07:40:12.338018    4105 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:40:12.341013    4105 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:40:12.341020    4105 kubeadm.go:157] found existing configuration files:
	
	I1014 07:40:12.341053    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf
	I1014 07:40:12.344155    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:40:12.344194    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:40:12.346967    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf
	I1014 07:40:12.349399    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:40:12.349451    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:40:12.352685    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf
	I1014 07:40:12.355643    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:40:12.355676    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:40:12.358297    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf
	I1014 07:40:12.361009    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:40:12.361036    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
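
The four grep-then-rm pairs above are one loop unrolled: any kubeconfig that does not reference the expected endpoint https://control-plane.minikube.internal:61521 is treated as stale and removed so the kubeadm phase below regenerates it. Compactly, as a sketch:

    ENDPOINT='https://control-plane.minikube.internal:61521'
    for conf in admin kubelet controller-manager scheduler; do
      f="/etc/kubernetes/${conf}.conf"
      # missing file or wrong endpoint -> remove so 'kubeadm init phase kubeconfig' rewrites it
      sudo grep -q "$ENDPOINT" "$f" 2>/dev/null || sudo rm -f "$f"
    done
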
	I1014 07:40:12.364134    4105 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:40:12.366862    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.390378    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.786516    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.915557    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:40:12.947099    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
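
Rather than a full kubeadm init, the restart path replays individual phases against the repaired kubeadm.yaml, in dependency order: certs, then the kubeconfigs signed by them, then kubelet bootstrap, then the control-plane static pods, then local etcd. The sequence spelled out, as a sketch:

    KCFG=/var/tmp/minikube/kubeadm.yaml
    PATH="/var/lib/minikube/binaries/v1.24.1:$PATH"
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is deliberately unquoted so "certs all" splits into two arguments
      sudo env PATH="$PATH" kubeadm init phase $phase --config "$KCFG"
    done
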
	I1014 07:40:12.969063    4105 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:40:12.969155    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:40:13.471516    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:40:13.971248    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:40:13.975566    4105 api_server.go:72] duration metric: took 1.006525625s to wait for apiserver process to appear ...
	I1014 07:40:13.975577    4105 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:40:13.975592    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:18.977573    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:18.977625    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:23.978080    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:23.978127    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:28.978661    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:28.978752    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:33.979890    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:33.979930    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:38.980892    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:38.980984    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:43.982534    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:43.982553    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:48.983952    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:48.984002    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:53.986191    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:53.986240    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:40:58.988429    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:40:58.988471    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:03.989349    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:03.989394    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:08.989627    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:08.989649    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:13.990934    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
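
Every probe of https://10.0.2.15:8443/healthz above times out after roughly 5s and the loop simply retries; after about a minute of failures the code falls back to gathering component logs. Probing the same endpoint by hand looks like this (-k skips TLS verification, since the apiserver certificate is not in the host trust store, and -f treats HTTP errors as failures):

    # retry the readiness probe the way the wait loop does, 5s budget per attempt
    until curl -skf --max-time 5 https://10.0.2.15:8443/healthz; do
      echo "apiserver not ready, retrying..."
      sleep 0.5
    done
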
	I1014 07:41:13.991417    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:14.026306    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:14.026511    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:14.046183    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:14.046309    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:14.060763    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:14.060851    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:14.073369    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:14.073450    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:14.084306    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:14.084441    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:14.095527    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:14.095619    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:14.105613    4105 logs.go:282] 0 containers: []
	W1014 07:41:14.105628    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:14.105695    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:14.116075    4105 logs.go:282] 0 containers: []
	W1014 07:41:14.116088    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:14.116095    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:14.116101    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:14.131194    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:14.131204    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:14.144051    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:14.144062    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:14.183469    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:14.183480    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:14.210159    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:14.210170    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:14.221600    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:14.221613    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:14.233924    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:14.233935    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:14.259047    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:14.259057    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:14.369959    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:14.369971    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:14.384058    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:14.384069    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:14.400607    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:14.400618    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:14.412119    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:14.412130    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:14.437168    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:14.437179    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:14.450994    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:14.451005    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:14.471090    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:14.471103    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
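
Editor's note: the cycle above repeats throughout this log. Each iteration polls https://10.0.2.15:8443/healthz, gives up after roughly 5 seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", then falls back to gathering component logs before retrying. The following is a minimal, hypothetical Go sketch of that polling pattern for readers unfamiliar with it; it is not minikube's actual api_server.go code, and the retry count and sleep are assumptions.

```go
// Illustrative sketch only, not minikube's implementation: poll an
// apiserver healthz endpoint with a per-request timeout, as seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The log shows each GET giving up after ~5s with
	// "Client.Timeout exceeded while awaiting headers".
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A diagnostic probe against a self-signed apiserver cert
		// would typically skip TLS verification (assumption).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	url := "https://10.0.2.15:8443/healthz"
	for i := 0; i < 3; i++ { // retry count is a placeholder
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			// On failure the real tool falls through to log
			// gathering (docker ps, docker logs, journalctl, ...)
			// before checking again.
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("healthz: %s\n", resp.Status)
		return
	}
}
```
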
	I1014 07:41:16.976296    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:21.977022    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:21.977634    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:22.016286    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:22.016473    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:22.039594    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:22.039725    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:22.054993    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:22.055083    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:22.066853    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:22.066940    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:22.077810    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:22.077894    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:22.092061    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:22.092142    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:22.111131    4105 logs.go:282] 0 containers: []
	W1014 07:41:22.111144    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:22.111221    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:22.122378    4105 logs.go:282] 0 containers: []
	W1014 07:41:22.122391    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:22.122401    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:22.122406    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:22.138014    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:22.138025    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:22.150591    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:22.150602    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:22.174726    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:22.174734    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:22.211642    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:22.211652    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:22.225915    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:22.225926    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:22.237262    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:22.237274    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:22.252874    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:22.252886    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:22.270054    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:22.270063    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:22.284075    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:22.284085    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:22.288473    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:22.288482    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:22.320972    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:22.320983    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:22.335335    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:22.335346    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:22.346975    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:22.346989    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:22.385479    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:22.385494    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:24.902094    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:29.904668    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:29.904852    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:29.920346    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:29.920437    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:29.932024    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:29.932109    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:29.943140    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:29.943217    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:29.954145    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:29.954229    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:29.964353    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:29.964433    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:29.975225    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:29.975301    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:29.994312    4105 logs.go:282] 0 containers: []
	W1014 07:41:29.994325    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:29.994394    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:30.004882    4105 logs.go:282] 0 containers: []
	W1014 07:41:30.004893    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:30.004901    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:30.004907    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:30.044241    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:30.044249    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:30.048861    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:30.048867    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:30.062958    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:30.062968    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:30.075810    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:30.075820    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:30.092218    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:30.092227    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:30.118187    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:30.118196    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:30.155318    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:30.155335    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:30.173964    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:30.173978    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:30.199134    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:30.199146    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:30.212551    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:30.212562    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:30.224273    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:30.224288    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:30.241601    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:30.241611    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:30.254083    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:30.254098    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:30.267690    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:30.267700    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:32.781957    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:37.782766    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:37.782915    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:37.796845    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:37.796939    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:37.810935    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:37.811018    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:37.825992    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:37.826070    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:37.836212    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:37.836298    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:37.848660    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:37.848737    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:37.859268    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:37.859360    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:37.869395    4105 logs.go:282] 0 containers: []
	W1014 07:41:37.869406    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:37.869474    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:37.879527    4105 logs.go:282] 0 containers: []
	W1014 07:41:37.879539    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:37.879548    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:37.879554    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:37.918607    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:37.918616    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:37.922778    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:37.922783    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:37.948023    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:37.948035    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:37.984254    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:37.984265    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:37.998067    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:37.998081    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:38.009305    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:38.009316    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:38.023760    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:38.023772    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:38.042548    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:38.042562    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:38.053796    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:38.053807    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:38.078761    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:38.078777    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:38.092397    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:38.092410    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:38.116320    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:38.116330    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:38.127861    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:38.127872    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:38.145104    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:38.145114    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
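
Editor's note: the log-gathering half of each cycle enumerates container IDs per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails each one with `docker logs --tail 400 <id>`. A minimal sketch of that pattern follows, shelling out to the docker CLI; it is an illustration under those assumptions, not minikube's logs.go code.

```go
// Illustrative sketch only: list a component's containers by name filter
// and tail their logs, mirroring the gather steps in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs uses the same filter/format flags as the "docker ps"
// invocations recorded in this log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}
```
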
	I1014 07:41:40.661238    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:45.663773    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:45.664018    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:45.685544    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:45.685650    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:45.698484    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:45.698570    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:45.709536    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:45.709608    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:45.720515    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:45.720584    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:45.731057    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:45.731136    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:45.746521    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:45.746600    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:45.757243    4105 logs.go:282] 0 containers: []
	W1014 07:41:45.757255    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:45.757324    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:45.767853    4105 logs.go:282] 0 containers: []
	W1014 07:41:45.767872    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:45.767880    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:45.767885    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:45.781381    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:45.781392    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:45.797543    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:45.797554    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:45.812462    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:45.812473    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:45.824382    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:45.824393    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:45.838711    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:45.838722    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:45.862991    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:45.863000    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:45.866985    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:45.866999    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:45.898906    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:45.898917    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:45.938528    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:45.938537    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:45.952871    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:45.952883    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:45.967449    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:45.967459    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:45.984426    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:45.984436    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:45.996748    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:45.996759    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:46.038450    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:46.038462    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:48.558920    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:41:53.561236    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:41:53.561416    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:41:53.574805    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:41:53.574895    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:41:53.586281    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:41:53.586354    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:41:53.596577    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:41:53.596660    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:41:53.607420    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:41:53.607490    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:41:53.618134    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:41:53.618216    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:41:53.628293    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:41:53.628361    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:41:53.639200    4105 logs.go:282] 0 containers: []
	W1014 07:41:53.639215    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:41:53.639284    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:41:53.650022    4105 logs.go:282] 0 containers: []
	W1014 07:41:53.650031    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:41:53.650039    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:41:53.650044    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:41:53.661635    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:41:53.661649    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:41:53.677810    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:41:53.677821    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:41:53.689414    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:41:53.689424    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:41:53.715496    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:41:53.715511    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:41:53.729658    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:41:53.729668    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:41:53.741581    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:41:53.741597    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:41:53.778134    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:41:53.778144    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:41:53.814672    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:41:53.814683    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:41:53.826449    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:41:53.826463    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:41:53.843730    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:41:53.843741    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:41:53.848283    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:41:53.848290    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:41:53.862395    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:41:53.862404    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:41:53.886346    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:41:53.886357    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:41:53.910630    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:41:53.910640    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:41:56.426242    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:01.428402    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:01.428659    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:01.447301    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:01.447404    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:01.461168    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:01.461258    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:01.472381    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:01.472459    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:01.482791    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:01.482882    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:01.494008    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:01.494086    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:01.505023    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:01.505110    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:01.515383    4105 logs.go:282] 0 containers: []
	W1014 07:42:01.515396    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:01.515468    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:01.525860    4105 logs.go:282] 0 containers: []
	W1014 07:42:01.525870    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:01.525879    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:01.525884    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:01.530470    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:01.530477    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:01.542704    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:01.542714    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:01.567171    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:01.567179    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:01.578847    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:01.578882    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:01.599556    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:01.599569    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:01.617833    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:01.617847    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:01.654782    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:01.654792    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:01.692652    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:01.692665    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:01.723558    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:01.723571    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:01.753488    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:01.753500    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:01.767494    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:01.767507    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:01.782674    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:01.782687    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:01.799852    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:01.799862    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:01.812032    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:01.812044    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:04.329306    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:09.332006    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:09.332439    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:09.365760    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:09.365907    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:09.384502    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:09.384617    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:09.398142    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:09.398235    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:09.410832    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:09.410911    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:09.421509    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:09.421598    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:09.433158    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:09.433245    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:09.443539    4105 logs.go:282] 0 containers: []
	W1014 07:42:09.443551    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:09.443612    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:09.454481    4105 logs.go:282] 0 containers: []
	W1014 07:42:09.454501    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:09.454509    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:09.454515    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:09.479839    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:09.479857    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:09.518914    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:09.518926    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:09.523535    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:09.523543    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:09.558041    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:09.558052    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:09.576173    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:09.576183    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:09.588771    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:09.588783    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:09.613233    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:09.613243    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:09.624701    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:09.624713    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:09.639481    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:09.639494    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:09.653969    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:09.653980    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:09.672970    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:09.672983    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:09.688250    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:09.688260    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:09.700375    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:09.700387    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:09.718708    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:09.718721    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:12.232728    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:17.235100    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:17.235527    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:17.266949    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:17.267104    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:17.286707    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:17.286824    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:17.301143    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:17.301220    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:17.312923    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:17.313002    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:17.328815    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:17.328885    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:17.339687    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:17.339761    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:17.350449    4105 logs.go:282] 0 containers: []
	W1014 07:42:17.350460    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:17.350520    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:17.361512    4105 logs.go:282] 0 containers: []
	W1014 07:42:17.361523    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:17.361530    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:17.361537    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:17.376588    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:17.376602    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:17.392458    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:17.392470    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:17.404260    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:17.404270    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:17.428210    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:17.428219    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:17.439332    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:17.439342    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:17.477418    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:17.477432    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:17.492403    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:17.492415    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:17.506964    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:17.506975    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:17.521316    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:17.521331    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:17.525476    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:17.525485    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:17.561657    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:17.561668    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:17.587268    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:17.587283    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:17.604969    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:17.604980    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:17.619530    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:17.619545    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:20.132658    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:25.134854    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:25.134986    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:25.147338    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:25.147428    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:25.157697    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:25.157776    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:25.167953    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:25.168033    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:25.184848    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:25.184931    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:25.194907    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:25.194982    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:25.205744    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:25.205815    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:25.215998    4105 logs.go:282] 0 containers: []
	W1014 07:42:25.216018    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:25.216080    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:25.226462    4105 logs.go:282] 0 containers: []
	W1014 07:42:25.226474    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:25.226481    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:25.226486    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:25.251428    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:25.251437    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:25.262490    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:25.262501    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:25.278937    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:25.278947    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:25.290577    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:25.290586    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:25.325315    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:25.325329    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:25.339698    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:25.339710    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:25.361851    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:25.361864    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:25.377144    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:25.377156    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:25.391147    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:25.391158    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:25.414278    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:25.414288    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:25.419016    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:25.419023    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:25.433611    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:25.433621    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:25.458516    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:25.458525    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:25.470741    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:25.470755    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:28.012129    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:33.014276    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:33.014537    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:33.035210    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:33.035321    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:33.051020    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:33.051109    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:33.064061    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:33.064138    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:33.081390    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:33.081481    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:33.091979    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:33.092056    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:33.102791    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:33.102869    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:33.113309    4105 logs.go:282] 0 containers: []
	W1014 07:42:33.113321    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:33.113390    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:33.124143    4105 logs.go:282] 0 containers: []
	W1014 07:42:33.124158    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:33.124166    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:33.124171    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:33.164481    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:33.164494    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:33.168798    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:33.168806    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:33.182465    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:33.182476    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:33.198074    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:33.198086    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:33.216641    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:33.216651    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:33.241263    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:33.241273    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:33.252431    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:33.252442    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:33.266183    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:33.266194    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:33.281288    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:33.281298    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:33.293384    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:33.293395    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:33.316276    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:33.316283    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:33.327850    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:33.327862    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:33.363112    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:33.363124    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:33.384235    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:33.384247    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:35.899668    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:40.901858    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:40.902161    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:40.930157    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:40.930307    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:40.948496    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:40.948585    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:40.962292    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:40.962373    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:40.976807    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:40.976894    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:40.987875    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:40.987947    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:40.998958    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:40.999023    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:41.009371    4105 logs.go:282] 0 containers: []
	W1014 07:42:41.009382    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:41.009449    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:41.019276    4105 logs.go:282] 0 containers: []
	W1014 07:42:41.019289    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:41.019296    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:41.019302    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:41.041517    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:41.041529    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:41.068932    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:41.068943    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:41.080617    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:41.080629    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:41.115737    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:41.115750    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:41.130025    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:41.130034    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:41.141743    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:41.141756    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:41.155353    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:41.155363    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:41.178580    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:41.178590    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:41.182682    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:41.182689    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:41.200391    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:41.200402    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:41.212822    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:41.212832    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:41.224067    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:41.224080    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:41.238452    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:41.238461    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:41.254076    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:41.254106    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:43.793763    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:48.796345    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
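The five-second gap between each "Checking apiserver healthz" line and the following "stopped: ... Client.Timeout exceeded" line reflects a hard client timeout on the probe. A minimal sketch of such a probe, assuming the same 5s timeout and a self-signed bootstrap certificate (an illustration, not minikube's actual implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues a single GET against the apiserver /healthz
// endpoint with a hard client timeout, mirroring the 5s gap between
// "Checking apiserver healthz" and "stopped: ... Client.Timeout" above.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate during
		// bootstrap, so a probe like this must skip verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded, as in the log
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}
```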
	I1014 07:42:48.796771    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:48.828002    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:48.828152    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:48.845988    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:48.846090    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:48.860022    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:48.860110    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:48.871484    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:48.871567    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:48.881972    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:48.882045    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:48.897184    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:48.897262    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:48.907221    4105 logs.go:282] 0 containers: []
	W1014 07:42:48.907233    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:48.907290    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:48.917991    4105 logs.go:282] 0 containers: []
	W1014 07:42:48.918001    4105 logs.go:284] No container was found matching "storage-provisioner"
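Every retry cycle begins by re-enumerating the control-plane containers with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`; the `k8s_` prefix is the naming scheme the cri-dockerd shim applies to pod containers, which is why "kindnet" and "storage-provisioner" consistently come back empty here. A sketch of that enumeration pass, assuming local Docker access (helper name hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeContainers returns the IDs of containers whose name matches
// k8s_<component>, the same filter the log lines above run over SSH.
func listKubeContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// strings.Fields on empty output yields an empty slice,
	// i.e. the "0 containers: []" case in the log.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listKubeContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```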
	I1014 07:42:48.918008    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:48.918013    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:48.931790    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:48.931803    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:48.948920    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:48.948946    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:48.973968    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:48.973978    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:48.984953    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:48.984965    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:49.002484    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:49.002496    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:49.013922    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:49.013931    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:49.026071    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:49.026081    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:49.030410    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:49.030417    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:49.064739    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:49.064750    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:49.079561    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:49.079573    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:49.092847    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:49.092858    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:49.106423    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:49.106434    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:49.129424    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:49.129433    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:49.167699    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:49.167713    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:51.686156    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:42:56.688651    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:42:56.688780    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:42:56.701684    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:42:56.701791    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:42:56.712859    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:42:56.712944    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:42:56.723959    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:42:56.724023    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:42:56.734659    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:42:56.734737    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:42:56.745439    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:42:56.745505    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:42:56.756679    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:42:56.756740    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:42:56.765994    4105 logs.go:282] 0 containers: []
	W1014 07:42:56.766006    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:42:56.766060    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:42:56.782766    4105 logs.go:282] 0 containers: []
	W1014 07:42:56.782777    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:42:56.782786    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:42:56.782791    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:42:56.787664    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:42:56.787670    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:42:56.801916    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:42:56.801929    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:42:56.820392    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:42:56.820404    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:42:56.832489    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:42:56.832500    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:42:56.844481    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:42:56.844491    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:42:56.885363    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:42:56.885376    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:42:56.921799    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:42:56.921809    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:42:56.947590    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:42:56.947605    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:42:56.962972    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:42:56.962983    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:42:56.976776    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:42:56.976786    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:42:56.990701    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:42:56.990711    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:42:57.004839    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:42:57.004854    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:42:57.015699    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:42:57.015711    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:42:57.038412    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:42:57.038419    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:42:59.551850    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:04.554323    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:04.554564    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:04.570561    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:04.570659    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:04.584059    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:04.584132    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:04.595202    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:04.595266    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:04.606278    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:04.606361    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:04.617506    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:04.617584    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:04.628521    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:04.628595    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:04.639210    4105 logs.go:282] 0 containers: []
	W1014 07:43:04.639222    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:04.639290    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:04.651833    4105 logs.go:282] 0 containers: []
	W1014 07:43:04.651845    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:04.651852    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:04.651857    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:04.688490    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:04.688505    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:04.692636    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:04.692643    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:04.727801    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:04.727814    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:04.742190    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:04.742203    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:04.769386    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:04.769396    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:04.786170    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:04.786179    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:04.800271    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:04.800283    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:04.812469    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:04.812481    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:04.827091    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:04.827104    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:04.841201    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:04.841212    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:04.853144    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:04.853158    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:04.875593    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:04.875603    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:04.890062    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:04.890075    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:04.907858    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:04.907868    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:07.422168    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:12.424794    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:12.424999    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:12.441061    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:12.441151    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:12.453292    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:12.453374    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:12.463855    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:12.463935    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:12.484568    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:12.484643    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:12.496191    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:12.496276    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:12.507096    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:12.507167    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:12.517954    4105 logs.go:282] 0 containers: []
	W1014 07:43:12.517966    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:12.518030    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:12.528637    4105 logs.go:282] 0 containers: []
	W1014 07:43:12.528648    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:12.528655    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:12.528661    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:12.544540    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:12.544550    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:12.556658    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:12.556669    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:12.570476    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:12.570487    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:12.582460    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:12.582472    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:12.601027    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:12.601043    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:12.615438    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:12.615448    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:12.651677    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:12.651687    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:12.663485    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:12.663496    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:12.684653    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:12.684664    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:12.707345    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:12.707354    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:12.744222    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:12.744230    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:12.768310    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:12.768324    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:12.782186    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:12.782197    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:12.796882    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:12.796893    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:15.302849    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:20.303444    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:20.303593    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:20.315340    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:20.315429    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:20.325994    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:20.326070    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:20.336983    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:20.337072    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:20.348359    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:20.348447    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:20.358725    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:20.358803    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:20.369600    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:20.369676    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:20.383959    4105 logs.go:282] 0 containers: []
	W1014 07:43:20.383969    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:20.384035    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:20.393958    4105 logs.go:282] 0 containers: []
	W1014 07:43:20.393972    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:20.393979    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:20.393986    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:20.405780    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:20.405790    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:20.430593    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:20.430605    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:20.444201    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:20.444215    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:20.463801    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:20.463812    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:20.481414    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:20.481426    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:20.505305    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:20.505315    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:20.544100    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:20.544108    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:20.557393    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:20.557409    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:20.562245    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:20.562254    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:20.597015    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:20.597026    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:20.611309    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:20.611321    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:20.634393    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:20.634408    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:20.645980    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:20.645993    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:20.657491    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:20.657505    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:23.172211    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:28.174465    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:28.174780    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:28.203413    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:28.203538    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:28.222888    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:28.222991    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:28.235800    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:28.235887    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:28.247449    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:28.247531    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:28.257580    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:28.257660    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:28.267727    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:28.267810    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:28.279834    4105 logs.go:282] 0 containers: []
	W1014 07:43:28.279847    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:28.279922    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:28.290092    4105 logs.go:282] 0 containers: []
	W1014 07:43:28.290102    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:28.290110    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:28.290115    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:28.301906    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:28.301917    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:28.320946    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:28.320956    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:28.337157    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:28.337167    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:28.348973    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:28.348984    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:28.387037    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:28.387045    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:28.409326    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:28.409342    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:28.430334    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:28.430343    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:28.456331    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:28.456342    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:28.468784    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:28.468797    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:28.483358    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:28.483369    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:28.506820    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:28.506829    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:28.510969    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:28.510975    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:28.545350    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:28.545361    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:28.560061    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:28.560073    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:31.078166    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:36.080763    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:36.081126    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:36.113340    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:36.113481    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:36.133648    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:36.133742    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:36.147409    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:36.147504    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:36.160179    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:36.160261    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:36.170730    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:36.170812    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:36.181748    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:36.181831    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:36.192583    4105 logs.go:282] 0 containers: []
	W1014 07:43:36.192597    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:36.192665    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:36.204874    4105 logs.go:282] 0 containers: []
	W1014 07:43:36.204885    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:36.204895    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:36.204901    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:36.219447    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:36.219458    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:36.245100    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:36.245114    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:36.257634    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:36.257646    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:36.272566    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:36.272577    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:36.295776    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:36.295784    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:36.299645    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:36.299651    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:36.335631    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:36.335641    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:36.350549    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:36.350560    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:36.392741    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:36.392756    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:36.432382    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:36.432391    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:36.446627    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:36.446637    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:36.468113    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:36.468125    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:36.482990    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:36.483000    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:36.495118    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:36.495132    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:39.008321    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:44.010421    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:44.010580    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:44.022354    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:44.022445    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:44.033534    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:44.033615    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:44.044730    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:44.044812    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:44.055400    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:44.055491    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:44.065828    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:44.065903    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:44.076335    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:44.076412    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:44.091504    4105 logs.go:282] 0 containers: []
	W1014 07:43:44.091517    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:44.091582    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:44.103169    4105 logs.go:282] 0 containers: []
	W1014 07:43:44.103181    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:44.103191    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:44.103201    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:44.140060    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:44.140071    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:44.179484    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:44.179501    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:44.207646    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:44.207657    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:44.220980    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:44.220991    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:44.235224    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:44.235238    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:44.239803    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:44.239812    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:44.254600    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:44.254617    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:44.272036    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:44.272051    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:44.287900    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:44.287910    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:44.312628    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:44.312644    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:44.328105    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:44.328119    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:44.339785    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:44.339798    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:44.353539    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:44.353550    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:44.373190    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:44.373205    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:46.887256    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:51.889541    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:51.889869    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:51.919560    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:51.919706    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:51.935701    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:51.935804    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:51.949499    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:51.949585    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:51.960472    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:51.960559    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:51.971495    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:51.971572    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:51.981960    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:51.982041    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:51.992648    4105 logs.go:282] 0 containers: []
	W1014 07:43:51.992660    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:51.992730    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:52.003281    4105 logs.go:282] 0 containers: []
	W1014 07:43:52.003292    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:52.003299    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:43:52.003304    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:43:52.007470    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:52.007480    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:52.030732    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:52.030739    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:52.042650    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:43:52.042660    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:43:52.067569    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:43:52.067579    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:43:52.081638    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:52.081648    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:52.096386    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:52.096395    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:52.114113    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:43:52.114123    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:43:52.127948    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:52.127959    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:52.166523    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:43:52.166535    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:43:52.205186    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:43:52.205199    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:43:52.219456    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:52.219466    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:43:52.233894    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:52.233904    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:52.246238    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:43:52.246249    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:43:52.261919    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:43:52.261929    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:43:54.775875    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:43:59.778029    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:43:59.778154    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:43:59.790105    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:43:59.790182    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:43:59.800824    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:43:59.800903    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:43:59.811329    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:43:59.811413    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:43:59.821941    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:43:59.822011    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:43:59.832556    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:43:59.832632    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:43:59.843349    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:43:59.843428    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:43:59.853804    4105 logs.go:282] 0 containers: []
	W1014 07:43:59.853814    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:43:59.853874    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:43:59.864488    4105 logs.go:282] 0 containers: []
	W1014 07:43:59.864500    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:43:59.864507    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:43:59.864513    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:43:59.877161    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:43:59.877171    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:43:59.895078    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:43:59.895094    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:43:59.909634    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:43:59.909649    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:43:59.921438    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:43:59.921450    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:43:59.944123    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:43:59.944132    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:43:59.981752    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:43:59.981761    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:44:00.000044    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:44:00.000058    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:44:00.025175    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:44:00.025189    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:44:00.043149    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:44:00.043160    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:44:00.058140    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:00.058153    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:00.062252    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:00.062260    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:00.099841    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:44:00.099854    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:44:00.111562    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:44:00.111573    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:44:00.122879    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:44:00.122891    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:44:02.643742    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:07.653634    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:07.653979    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:44:07.682200    4105 logs.go:282] 2 containers: [ecac423f28b8 75b8f83bcedd]
	I1014 07:44:07.682326    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:44:07.697756    4105 logs.go:282] 2 containers: [00650f131c70 ef8f73ba51dc]
	I1014 07:44:07.697848    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:44:07.709991    4105 logs.go:282] 1 containers: [a1fbcfd811ec]
	I1014 07:44:07.710076    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:44:07.720669    4105 logs.go:282] 2 containers: [aa2e43f43f58 01fe0352d451]
	I1014 07:44:07.720741    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:44:07.733825    4105 logs.go:282] 1 containers: [ac6b1c473cd3]
	I1014 07:44:07.733934    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:44:07.748633    4105 logs.go:282] 2 containers: [8f49dbe5bd49 88a3564ca66c]
	I1014 07:44:07.748717    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:44:07.760371    4105 logs.go:282] 0 containers: []
	W1014 07:44:07.760383    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:44:07.760453    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:44:07.770622    4105 logs.go:282] 0 containers: []
	W1014 07:44:07.770634    4105 logs.go:284] No container was found matching "storage-provisioner"
	I1014 07:44:07.770642    4105 logs.go:123] Gathering logs for coredns [a1fbcfd811ec] ...
	I1014 07:44:07.770647    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1fbcfd811ec"
	I1014 07:44:07.782018    4105 logs.go:123] Gathering logs for kube-scheduler [01fe0352d451] ...
	I1014 07:44:07.782029    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01fe0352d451"
	I1014 07:44:07.797672    4105 logs.go:123] Gathering logs for kube-proxy [ac6b1c473cd3] ...
	I1014 07:44:07.797685    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac6b1c473cd3"
	I1014 07:44:07.810008    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:44:07.810019    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:44:07.836022    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:44:07.836033    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:44:07.849997    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:44:07.850010    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:44:07.890764    4105 logs.go:123] Gathering logs for etcd [ef8f73ba51dc] ...
	I1014 07:44:07.890777    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8f73ba51dc"
	I1014 07:44:07.905587    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:44:07.905599    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:44:07.910060    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:44:07.910066    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:44:07.944758    4105 logs.go:123] Gathering logs for kube-scheduler [aa2e43f43f58] ...
	I1014 07:44:07.944767    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa2e43f43f58"
	I1014 07:44:07.956676    4105 logs.go:123] Gathering logs for kube-controller-manager [88a3564ca66c] ...
	I1014 07:44:07.956687    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a3564ca66c"
	I1014 07:44:07.970872    4105 logs.go:123] Gathering logs for kube-apiserver [75b8f83bcedd] ...
	I1014 07:44:07.970881    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75b8f83bcedd"
	I1014 07:44:08.006481    4105 logs.go:123] Gathering logs for etcd [00650f131c70] ...
	I1014 07:44:08.006492    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00650f131c70"
	I1014 07:44:08.020706    4105 logs.go:123] Gathering logs for kube-apiserver [ecac423f28b8] ...
	I1014 07:44:08.020720    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecac423f28b8"
	I1014 07:44:08.034792    4105 logs.go:123] Gathering logs for kube-controller-manager [8f49dbe5bd49] ...
	I1014 07:44:08.034804    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f49dbe5bd49"
	I1014 07:44:10.556654    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:15.563581    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:15.563724    4105 kubeadm.go:597] duration metric: took 4m3.244681791s to restartPrimaryControlPlane
	W1014 07:44:15.563895    4105 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 07:44:15.563961    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1014 07:44:16.580457    4105 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.015729042s)
	I1014 07:44:16.580528    4105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:44:16.585460    4105 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:44:16.588324    4105 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:44:16.591110    4105 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:44:16.591115    4105 kubeadm.go:157] found existing configuration files:
	
	I1014 07:44:16.591147    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf
	I1014 07:44:16.595936    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:44:16.595971    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:44:16.598756    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf
	I1014 07:44:16.602077    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:44:16.602110    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:44:16.605191    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf
	I1014 07:44:16.607834    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:44:16.607862    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:44:16.610743    4105 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf
	I1014 07:44:16.613845    4105 kubeadm.go:163] "https://control-plane.minikube.internal:61521" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61521 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:44:16.613872    4105 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
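
The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here every grep exits with status 2 because the files are absent after the reset, so each is removed). A sketch of the same check, with error handling simplified:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// For each kubeconfig, keep it only if it already references the expected
// control-plane endpoint; otherwise remove it, mirroring the grep/rm pairs
// above. Endpoint and file list come from the log; a read error (missing
// file) is treated the same as a stale file.
func main() {
	endpoint := []byte("https://control-plane.minikube.internal:61521")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, endpoint) {
			continue // config already points at the right endpoint
		}
		fmt.Printf("%s missing or stale - removing\n", f)
		os.Remove(f) // equivalent of `sudo rm -f`; errors ignored
	}
}
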
	I1014 07:44:16.616573    4105 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:44:16.634732    4105 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1014 07:44:16.634761    4105 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:44:16.682651    4105 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:44:16.682709    4105 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:44:16.682753    4105 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:44:16.732957    4105 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:44:16.738129    4105 out.go:235]   - Generating certificates and keys ...
	I1014 07:44:16.738254    4105 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:44:16.738440    4105 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:44:16.738491    4105 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 07:44:16.738523    4105 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 07:44:16.738561    4105 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 07:44:16.738591    4105 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 07:44:16.738632    4105 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 07:44:16.738662    4105 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 07:44:16.738699    4105 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 07:44:16.738744    4105 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 07:44:16.738765    4105 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 07:44:16.738795    4105 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:44:16.827466    4105 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:44:16.910164    4105 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:44:17.167559    4105 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:44:17.240156    4105 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:44:17.274826    4105 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:44:17.275247    4105 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:44:17.275273    4105 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:44:17.366266    4105 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:44:17.370366    4105 out.go:235]   - Booting up control plane ...
	I1014 07:44:17.370409    4105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:44:17.370468    4105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:44:17.370564    4105 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:44:17.370651    4105 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:44:17.370885    4105 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 07:44:21.871103    4105 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502145 seconds
	I1014 07:44:21.871178    4105 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:44:21.875756    4105 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:44:22.384193    4105 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:44:22.384342    4105 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-496000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:44:22.890153    4105 kubeadm.go:310] [bootstrap-token] Using token: 7cxkf0.of17tz2v25ggwn3g
	I1014 07:44:22.893762    4105 out.go:235]   - Configuring RBAC rules ...
	I1014 07:44:22.893825    4105 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:44:22.893872    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:44:22.895840    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:44:22.897454    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:44:22.898488    4105 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:44:22.899547    4105 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:44:22.902946    4105 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:44:23.069924    4105 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:44:23.295165    4105 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:44:23.295936    4105 kubeadm.go:310] 
	I1014 07:44:23.295972    4105 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:44:23.295978    4105 kubeadm.go:310] 
	I1014 07:44:23.296022    4105 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:44:23.296027    4105 kubeadm.go:310] 
	I1014 07:44:23.296038    4105 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:44:23.296074    4105 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:44:23.296101    4105 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:44:23.296105    4105 kubeadm.go:310] 
	I1014 07:44:23.296134    4105 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:44:23.296146    4105 kubeadm.go:310] 
	I1014 07:44:23.296170    4105 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:44:23.296175    4105 kubeadm.go:310] 
	I1014 07:44:23.296201    4105 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:44:23.296234    4105 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:44:23.296283    4105 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:44:23.296288    4105 kubeadm.go:310] 
	I1014 07:44:23.296328    4105 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:44:23.296377    4105 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:44:23.296382    4105 kubeadm.go:310] 
	I1014 07:44:23.296421    4105 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7cxkf0.of17tz2v25ggwn3g \
	I1014 07:44:23.296470    4105 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 \
	I1014 07:44:23.296481    4105 kubeadm.go:310] 	--control-plane 
	I1014 07:44:23.296486    4105 kubeadm.go:310] 
	I1014 07:44:23.296527    4105 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:44:23.296531    4105 kubeadm.go:310] 
	I1014 07:44:23.296567    4105 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7cxkf0.of17tz2v25ggwn3g \
	I1014 07:44:23.296640    4105 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:faabd13cfdf25c259cb25d1f4d857023428bd020fe52b3b863fea78f48891e14 
	I1014 07:44:23.296846    4105 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:44:23.296940    4105 cni.go:84] Creating CNI manager for ""
	I1014 07:44:23.296950    4105 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:44:23.301475    4105 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 07:44:23.311620    4105 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 07:44:23.318247    4105 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
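
Configuring the bridge CNI writes a conflist into /etc/cni/net.d. The log records only the size (496 bytes), not the payload, so the JSON embedded below is a representative bridge-plus-portmap config, not minikube's exact file:

package main

import (
	"fmt"
	"os"
)

// Write a bridge CNI conflist like the one copied to
// /etc/cni/net.d/1-k8s.conflist above. The payload is an assumption:
// a typical bridge config with host-local IPAM and port mapping.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Written to the working directory so the sketch runs unprivileged;
	// the real target is /etc/cni/net.d/1-k8s.conflist.
	path := "1-k8s.conflist"
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote", path)
}
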
	I1014 07:44:23.325148    4105 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:44:23.325220    4105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:44:23.325295    4105 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-496000 minikube.k8s.io/updated_at=2024_10_14T07_44_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=stopped-upgrade-496000 minikube.k8s.io/primary=true
	I1014 07:44:23.362028    4105 ops.go:34] apiserver oom_adj: -16
	I1014 07:44:23.362132    4105 kubeadm.go:1113] duration metric: took 36.961375ms to wait for elevateKubeSystemPrivileges
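
The oom_adj check above resolves the kube-apiserver PID and reads /proc/<pid>/oom_adj; -16 on the legacy -17..15 scale makes the OOM killer very unlikely to select the apiserver under memory pressure. A sketch of the same read:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Resolve the kube-apiserver PID with pgrep and read /proc/<pid>/oom_adj,
// mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj` from the log.
func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Println("no kube-apiserver process")
		return
	}
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
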
	I1014 07:44:23.372260    4105 kubeadm.go:394] duration metric: took 4m11.064688s to StartCluster
	I1014 07:44:23.372278    4105 settings.go:142] acquiring lock: {Name:mk5f137d4011ca4bbc3c8514f15406fc4b6b595c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:44:23.372369    4105 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:44:23.372756    4105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/kubeconfig: {Name:mkbe79fce3a1d9ddd6036a978e097f20767985b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:44:23.372929    4105 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:44:23.373032    4105 config.go:182] Loaded profile config "stopped-upgrade-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1014 07:44:23.372959    4105 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:44:23.373089    4105 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-496000"
	I1014 07:44:23.373099    4105 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-496000"
	W1014 07:44:23.373102    4105 addons.go:243] addon storage-provisioner should already be in state true
	I1014 07:44:23.373110    4105 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-496000"
	I1014 07:44:23.373140    4105 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-496000"
	I1014 07:44:23.373114    4105 host.go:66] Checking if "stopped-upgrade-496000" exists ...
	I1014 07:44:23.374412    4105 kapi.go:59] client config for stopped-upgrade-496000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/profiles/stopped-upgrade-496000/client.key", CAFile:"/Users/jenkins/minikube-integration/19790-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1064e6e40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:44:23.374557    4105 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-496000"
	W1014 07:44:23.374563    4105 addons.go:243] addon default-storageclass should already be in state true
	I1014 07:44:23.374570    4105 host.go:66] Checking if "stopped-upgrade-496000" exists ...
	I1014 07:44:23.376466    4105 out.go:177] * Verifying Kubernetes components...
	I1014 07:44:23.376863    4105 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:44:23.380738    4105 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:44:23.380751    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:44:23.384457    4105 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:44:23.388556    4105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:44:23.392551    4105 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:44:23.392558    4105 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:44:23.392566    4105 sshutil.go:53] new ssh client: &{IP:localhost Port:61428 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/stopped-upgrade-496000/id_rsa Username:docker}
	I1014 07:44:23.482356    4105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:44:23.487703    4105 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:44:23.487760    4105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:44:23.491906    4105 api_server.go:72] duration metric: took 118.912208ms to wait for apiserver process to appear ...
	I1014 07:44:23.491915    4105 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:44:23.491923    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:23.511351    4105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:44:23.532934    4105 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:44:23.874887    4105 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:44:23.874899    4105 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:44:28.495917    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:28.495941    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:33.497545    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:33.497566    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:38.498818    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:38.498839    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:43.499886    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:43.499932    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:48.500915    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:48.500956    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:44:53.502035    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:53.502071    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1014 07:44:53.882779    4105 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1014 07:44:53.887103    4105 out.go:177] * Enabled addons: storage-provisioner
	I1014 07:44:53.898801    4105 addons.go:510] duration metric: took 30.519991834s for enable addons: enabled=[storage-provisioner]
	I1014 07:44:58.503094    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:44:58.503124    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:03.504334    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:03.504354    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:08.506066    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:08.506088    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:13.507785    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:13.507806    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:18.509895    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:18.509919    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:23.512091    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:23.512221    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:23.528257    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:23.528339    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:23.539089    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:23.539166    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:23.556400    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:23.556487    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:23.566771    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:23.566847    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:23.577246    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:23.577331    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:23.588179    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:23.588258    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:23.598711    4105 logs.go:282] 0 containers: []
	W1014 07:45:23.598728    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:23.598795    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:23.609459    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:23.609474    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:23.609479    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:23.614284    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:23.614290    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:23.650720    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:23.650735    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:23.663209    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:23.663220    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:23.674903    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:23.674917    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:23.686343    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:23.686359    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:23.711447    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:23.711454    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:23.723007    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:23.723020    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:23.762748    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:23.762759    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:23.777769    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:23.777781    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:23.792771    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:23.792782    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:23.804235    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:23.804248    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:23.831718    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:23.831732    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:26.350974    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:31.353355    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:31.353596    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:31.376937    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:31.377037    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:31.390938    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:31.391028    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:31.403450    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:31.403537    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:31.414199    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:31.414448    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:31.424892    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:31.424962    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:31.440354    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:31.440417    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:31.450140    4105 logs.go:282] 0 containers: []
	W1014 07:45:31.450149    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:31.450205    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:31.464867    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:31.464887    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:31.464891    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:31.489903    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:31.489910    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:31.527448    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:31.527456    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:31.531671    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:31.531678    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:31.567147    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:31.567158    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:31.581897    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:31.581910    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:31.594855    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:31.594867    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:31.610357    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:31.610372    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:31.627976    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:31.627986    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:31.642336    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:31.642352    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:31.657406    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:31.657416    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:31.673584    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:31.673599    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:31.685064    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:31.685079    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:34.198510    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:39.200842    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:39.201009    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:39.212458    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:39.212547    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:39.223235    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:39.223317    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:39.237179    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:39.237260    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:39.247597    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:39.247675    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:39.257746    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:39.257836    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:39.268378    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:39.268448    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:39.278665    4105 logs.go:282] 0 containers: []
	W1014 07:45:39.278683    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:39.278749    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:39.289397    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:39.289412    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:39.289418    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:39.301545    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:39.301556    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:39.340713    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:39.340726    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:39.376927    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:39.376942    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:39.388781    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:39.388793    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:39.400157    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:39.400171    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:39.412031    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:39.412043    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:39.429761    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:39.429773    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:39.453501    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:39.453509    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:39.457936    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:39.457944    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:39.471976    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:39.471987    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:39.488404    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:39.488414    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:39.503535    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:39.503546    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:42.016853    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:47.019159    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:47.019358    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:47.032554    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:47.032649    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:47.043908    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:47.043994    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:47.054703    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:47.054779    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:47.070476    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:47.070564    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:47.081178    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:47.081255    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:47.092369    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:47.092449    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:47.103566    4105 logs.go:282] 0 containers: []
	W1014 07:45:47.103579    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:47.103650    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:47.114386    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:47.114405    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:47.114410    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:47.126832    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:47.126844    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:47.138509    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:47.138518    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:47.164159    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:47.164176    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:47.180228    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:47.180240    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:47.216933    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:47.216945    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:47.231413    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:47.231424    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:47.245619    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:47.245627    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:47.257746    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:47.257757    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:47.273113    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:47.273125    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:47.291037    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:47.291045    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:47.295450    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:47.295456    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:47.332750    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:47.332762    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:49.851746    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:45:54.854002    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:45:54.854132    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:45:54.866957    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:45:54.867045    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:45:54.878113    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:45:54.878197    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:45:54.888759    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:45:54.888846    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:45:54.899277    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:45:54.899355    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:45:54.909977    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:45:54.910058    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:45:54.920753    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:45:54.920825    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:45:54.930900    4105 logs.go:282] 0 containers: []
	W1014 07:45:54.930910    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:45:54.930972    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:45:54.941776    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:45:54.941794    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:45:54.941800    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:45:54.981746    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:45:54.981755    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:45:55.018173    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:45:55.018187    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:45:55.032588    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:45:55.032600    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:45:55.044094    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:45:55.044104    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:45:55.063321    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:45:55.063333    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:45:55.076041    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:45:55.076056    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:45:55.099822    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:45:55.099836    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:45:55.111486    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:45:55.111500    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:45:55.115595    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:45:55.115600    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:45:55.130121    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:45:55.130138    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:45:55.145369    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:45:55.145383    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:45:55.167200    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:45:55.167212    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:45:57.683964    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:02.686132    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:02.686241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:02.701702    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:02.701793    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:02.712017    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:02.712100    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:02.722584    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:02.722655    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:02.733616    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:02.733692    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:02.744059    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:02.744133    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:02.754806    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:02.754878    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:02.764967    4105 logs.go:282] 0 containers: []
	W1014 07:46:02.764979    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:02.765051    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:02.775568    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:02.775589    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:02.775594    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:02.787813    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:02.787826    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:02.800550    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:02.800564    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:02.817813    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:02.817825    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:02.841327    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:02.841336    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:02.882173    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:02.882188    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:02.894684    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:02.894697    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:02.908636    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:02.908646    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:02.922731    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:02.922764    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:02.938527    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:02.938544    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:02.951912    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:02.951927    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:02.963503    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:02.963514    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:03.003435    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:03.003444    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:05.509580    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:10.511765    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:10.511884    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:10.523040    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:10.523144    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:10.534721    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:10.534801    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:10.545369    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:10.545453    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:10.556457    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:10.556536    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:10.566314    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:10.566390    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:10.576299    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:10.576380    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:10.589999    4105 logs.go:282] 0 containers: []
	W1014 07:46:10.590013    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:10.590079    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:10.600758    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:10.600774    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:10.600779    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:10.612357    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:10.612370    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:10.628114    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:10.628124    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:10.645837    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:10.645848    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:10.658237    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:10.658249    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:10.698628    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:10.698642    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:10.713365    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:10.713377    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:10.727451    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:10.727462    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:10.746245    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:10.746256    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:10.758218    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:10.758229    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:10.762693    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:10.762699    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:10.822949    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:10.822964    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:10.835085    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:10.835097    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
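The cycle above repeats for the rest of this run: each probe of https://10.0.2.15:8443/healthz gives up after five seconds with "context deadline exceeded", and minikube then re-enumerates the k8s_* containers and tails their logs before probing again. As a rough illustration only (this is not minikube's actual api_server.go; the skipped TLS verification is an assumption for the VM's self-signed apiserver certificate), the probe half of that loop reduces to:

	// Hypothetical sketch of the healthz probe seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
			Transport: &http.Transport{
				// assumption: a raw probe of the self-signed apiserver cert
				// inside the VM would need to skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < 3; i++ {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. context deadline exceeded
				time.Sleep(2500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Println("healthz:", resp.Status)
			return
		}
	}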
	I1014 07:46:13.360790    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:18.362996    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:18.363205    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:18.377728    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:18.377811    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:18.388158    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:18.388241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:18.399222    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:18.399296    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:18.409674    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:18.409753    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:18.424272    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:18.424351    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:18.435296    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:18.435374    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:18.445666    4105 logs.go:282] 0 containers: []
	W1014 07:46:18.445676    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:18.445738    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:18.455768    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:18.455784    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:18.455790    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:18.490924    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:18.490939    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:18.509785    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:18.509795    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:18.521607    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:18.521618    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:18.547569    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:18.547580    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:18.559043    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:18.559057    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:18.597749    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:18.597759    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:18.602279    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:18.602287    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:18.613929    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:18.613940    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:18.625535    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:18.625546    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:18.644495    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:18.644506    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:18.656151    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:18.656162    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:18.671148    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:18.671160    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:21.187268    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:26.189439    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:26.189641    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:26.203524    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:26.203614    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:26.214219    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:26.214294    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:26.224912    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:26.224986    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:26.235078    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:26.235158    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:26.245336    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:26.245419    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:26.255564    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:26.255638    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:26.266127    4105 logs.go:282] 0 containers: []
	W1014 07:46:26.266142    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:26.266203    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:26.276967    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:26.276983    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:26.276989    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:26.312400    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:26.312410    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:26.331020    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:26.331033    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:26.345672    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:26.345682    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:26.357251    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:26.357262    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:26.369296    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:26.369307    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:26.384393    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:26.384405    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:26.396426    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:26.396437    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:26.433036    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:26.433046    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:26.437286    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:26.437292    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:26.454714    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:26.454729    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:26.466730    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:26.466743    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:26.490997    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:26.491007    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:29.005441    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:34.007177    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:34.007402    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:34.022044    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:34.022140    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:34.034063    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:34.034146    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:34.045096    4105 logs.go:282] 2 containers: [c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:34.045173    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:34.056161    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:34.056241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:34.066343    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:34.066425    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:34.077209    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:34.077289    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:34.087130    4105 logs.go:282] 0 containers: []
	W1014 07:46:34.087144    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:34.087206    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:34.097294    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:34.097315    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:34.097321    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:34.132918    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:34.132930    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:34.147177    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:34.147186    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:34.159320    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:34.159333    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:34.197037    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:34.197050    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:34.211268    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:34.211279    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:34.228354    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:34.228364    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:34.246037    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:34.246053    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:34.262174    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:34.262191    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:34.299013    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:34.299024    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:34.328699    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:34.328711    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:34.354466    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:34.354486    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:34.363514    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:34.363528    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:36.896696    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:41.898850    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:41.899127    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:41.919343    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:41.919448    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:41.932731    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:41.932806    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:41.944452    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:41.944543    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:41.955075    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:41.955163    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:41.965417    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:41.965492    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:41.975568    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:41.975636    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:41.986357    4105 logs.go:282] 0 containers: []
	W1014 07:46:41.986373    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:41.986439    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:41.996849    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:41.996867    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:41.996872    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:42.001428    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:46:42.001447    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:46:42.013898    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:46:42.013909    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:46:42.025326    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:42.025336    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:42.037903    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:42.037914    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:42.049265    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:42.049277    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:42.070826    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:42.070841    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:42.108336    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:42.108347    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:42.122712    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:42.122722    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:42.134466    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:42.134478    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:42.159444    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:42.159458    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:42.195975    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:42.195987    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:42.208142    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:42.208153    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:42.223335    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:42.223346    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:42.235704    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:42.235717    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:44.752329    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:49.754482    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:49.754635    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:49.766968    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:49.767064    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:49.777971    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:49.778047    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:49.789201    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:49.789286    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:49.799866    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:49.799943    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:49.810208    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:49.810274    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:49.825158    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:49.825234    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:49.836351    4105 logs.go:282] 0 containers: []
	W1014 07:46:49.836367    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:49.836432    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:49.847470    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:49.847490    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:46:49.847496    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:46:49.861618    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:49.861628    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:46:49.873792    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:49.873803    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:49.885954    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:49.885964    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:49.909552    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:46:49.909562    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:46:49.921068    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:49.921081    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:49.932807    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:49.932818    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:49.944743    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:49.944755    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:49.949500    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:49.949507    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:49.988637    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:49.988648    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:50.007197    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:50.007207    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:50.032056    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:50.032067    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:50.043623    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:50.043636    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:50.081917    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:50.081926    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:50.095682    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:50.095694    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:52.612900    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:46:57.615140    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:46:57.615339    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:46:57.627730    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:46:57.627811    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:46:57.638249    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:46:57.638335    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:46:57.649505    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:46:57.649587    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:46:57.660342    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:46:57.660428    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:46:57.670850    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:46:57.670931    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:46:57.681651    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:46:57.681736    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:46:57.692191    4105 logs.go:282] 0 containers: []
	W1014 07:46:57.692201    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:46:57.692263    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:46:57.702421    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:46:57.702437    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:46:57.702442    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:46:57.727850    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:46:57.727867    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:46:57.763340    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:46:57.763353    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:46:57.777471    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:46:57.777482    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:46:57.791883    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:46:57.791895    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:46:57.804817    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:46:57.804830    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:46:57.816275    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:46:57.816288    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:46:57.827882    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:46:57.827892    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:46:57.845949    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:46:57.845959    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:46:57.859181    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:46:57.859191    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:46:57.870583    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:46:57.870597    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:46:57.909344    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:46:57.909355    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:46:57.914251    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:46:57.914260    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:46:57.928027    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:46:57.928039    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:46:57.942711    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:46:57.942724    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:00.456289    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:05.458520    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:05.458739    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:05.472091    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:05.472182    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:05.489447    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:05.489525    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:05.500493    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:05.500577    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:05.515546    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:05.515627    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:05.528789    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:05.528869    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:05.539443    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:05.539517    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:05.550993    4105 logs.go:282] 0 containers: []
	W1014 07:47:05.551005    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:05.551068    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:05.561917    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:05.561933    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:05.561939    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:05.574153    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:05.574165    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:05.588915    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:05.588926    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:05.623615    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:05.623628    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:05.640208    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:05.640218    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:05.653039    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:05.653048    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:05.678538    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:05.678546    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:05.690858    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:05.690872    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:05.730481    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:05.730493    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:05.734876    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:05.734884    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:05.747050    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:05.747062    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:05.760508    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:05.760521    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:05.775404    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:05.775414    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:05.793422    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:05.793433    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:05.807519    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:05.807532    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:08.320310    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:13.322503    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:13.322664    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:13.336174    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:13.336261    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:13.354029    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:13.354107    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:13.364980    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:13.365066    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:13.375920    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:13.375998    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:13.386340    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:13.386415    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:13.396749    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:13.396835    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:13.407530    4105 logs.go:282] 0 containers: []
	W1014 07:47:13.407541    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:13.407611    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:13.417910    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:13.417928    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:13.417936    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:13.429941    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:13.429953    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:13.441652    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:13.441670    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:13.455791    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:13.455802    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:13.493782    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:13.493794    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:13.507751    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:13.507765    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:13.542490    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:13.542504    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:13.554881    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:13.554893    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:13.567050    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:13.567066    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:13.590991    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:13.590998    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:13.595135    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:13.595141    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:13.614212    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:13.614223    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:13.626080    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:13.626092    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:13.639319    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:13.639331    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:13.654319    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:13.654329    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:16.180330    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:21.182602    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:21.182784    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:21.194176    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:21.194254    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:21.204528    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:21.204599    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:21.215499    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:21.215581    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:21.230953    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:21.231024    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:21.241712    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:21.241793    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:21.252216    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:21.252305    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:21.262919    4105 logs.go:282] 0 containers: []
	W1014 07:47:21.262930    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:21.262993    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:21.273489    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:21.273507    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:21.273514    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:21.308355    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:21.308369    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:21.319919    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:21.319930    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:21.338662    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:21.338672    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:21.375504    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:21.375513    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:21.398722    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:21.398733    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:21.410371    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:21.410381    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:21.414767    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:21.414774    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:21.425901    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:21.425913    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:21.443702    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:21.443713    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:21.455726    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:21.455737    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:21.467631    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:21.467641    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:21.493465    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:21.493473    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:21.505771    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:21.505785    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:21.520584    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:21.520594    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:24.036089    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:29.038346    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:29.038571    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:29.054775    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:29.054870    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:29.067493    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:29.067573    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:29.078835    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:29.078925    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:29.094669    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:29.094752    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:29.105196    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:29.105276    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:29.116309    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:29.116377    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:29.126467    4105 logs.go:282] 0 containers: []
	W1014 07:47:29.126480    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:29.126546    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:29.137076    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:29.137094    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:29.137100    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:29.148715    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:29.148726    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:29.160870    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:29.160881    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:29.186440    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:29.186452    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:29.221730    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:29.221740    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:29.233975    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:29.233986    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:29.245910    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:29.245920    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:29.257644    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:29.257656    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:29.262300    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:29.262308    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:29.276570    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:29.276579    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:29.288425    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:29.288436    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:29.303528    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:29.303537    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:29.341800    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:29.341808    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:29.355641    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:29.355655    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:29.366802    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:29.366812    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:31.894897    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:36.897127    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:36.897418    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:36.918500    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:36.918612    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:36.934590    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:36.934680    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:36.946420    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:36.946505    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:36.957536    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:36.957622    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:36.972634    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:36.972713    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:36.984061    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:36.984139    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:36.994525    4105 logs.go:282] 0 containers: []
	W1014 07:47:36.994537    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:36.994604    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:37.004944    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:37.004967    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:37.004973    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:37.017212    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:37.017224    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:37.035402    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:37.035413    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:37.073655    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:37.073666    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:37.090269    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:37.090281    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:37.105712    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:37.105722    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:37.123839    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:37.123850    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:37.136278    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:37.136290    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:37.147731    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:37.147745    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:37.169259    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:37.169270    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:37.195344    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:37.195354    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:37.199797    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:37.199805    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:37.235242    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:37.235253    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:37.250053    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:37.250064    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:37.263189    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:37.263199    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:39.776986    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:44.779167    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:44.779360    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:44.796573    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:44.796673    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:44.810487    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:44.810570    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:44.821908    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:44.821994    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:44.833032    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:44.833112    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:44.843719    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:44.843801    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:44.856903    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:44.856978    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:44.867648    4105 logs.go:282] 0 containers: []
	W1014 07:47:44.867659    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:44.867731    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:44.879277    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:44.879296    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:44.879302    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:44.894153    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:44.894164    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:44.906126    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:44.906136    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:44.922084    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:44.922096    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:44.939653    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:44.939663    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:44.951477    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:44.951488    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:44.987498    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:44.987508    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:45.000961    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:45.000972    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:45.013028    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:45.013040    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:45.024554    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:45.024564    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:45.029517    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:45.029524    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:45.043850    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:45.043865    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:45.080660    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:45.080668    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:45.092528    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:45.092541    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:45.105536    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:45.105547    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:47.631954    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:47:52.634186    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:47:52.634404    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:47:52.649468    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:47:52.649567    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:47:52.661862    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:47:52.661940    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:47:52.673191    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:47:52.673277    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:47:52.684780    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:47:52.684851    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:47:52.696155    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:47:52.696241    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:47:52.707496    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:47:52.707571    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:47:52.718571    4105 logs.go:282] 0 containers: []
	W1014 07:47:52.718581    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:47:52.718647    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:47:52.729172    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:47:52.729189    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:47:52.729195    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:47:52.766083    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:47:52.766098    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:47:52.778839    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:47:52.778853    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:47:52.791382    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:47:52.791392    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:47:52.806195    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:47:52.806207    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:47:52.817882    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:47:52.817895    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:47:52.832104    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:47:52.832115    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:47:52.847246    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:47:52.847255    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:47:52.862882    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:47:52.862895    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:47:52.900371    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:47:52.900385    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:47:52.912621    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:47:52.912633    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:47:52.931368    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:47:52.931379    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:47:52.956015    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:47:52.956026    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:47:52.960548    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:47:52.960554    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:47:52.975119    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:47:52.975134    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:47:55.490706    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:48:00.492621    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:48:00.492760    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:48:00.508340    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:48:00.508421    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:48:00.519942    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:48:00.520013    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:48:00.536062    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:48:00.536148    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:48:00.547392    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:48:00.547473    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:48:00.558496    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:48:00.558570    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:48:00.569913    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:48:00.569988    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:48:00.583128    4105 logs.go:282] 0 containers: []
	W1014 07:48:00.583140    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:48:00.583207    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:48:00.599709    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:48:00.599726    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:48:00.599732    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:48:00.615295    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:48:00.615310    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:48:00.627719    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:48:00.627732    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:48:00.640521    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:48:00.640536    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:48:00.656561    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:48:00.656575    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:48:00.669334    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:48:00.669348    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:48:00.684332    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:48:00.684342    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:48:00.698827    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:48:00.698839    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:48:00.713326    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:48:00.713338    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:48:00.718100    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:48:00.718109    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:48:00.754024    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:48:00.754035    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:48:00.766066    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:48:00.766076    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:48:00.778127    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:48:00.778138    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:48:00.797109    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:48:00.797121    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:48:00.822273    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:48:00.822287    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:48:03.363444    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:48:08.365615    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:48:08.365867    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:48:08.397640    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:48:08.397726    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:48:08.411528    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:48:08.411613    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:48:08.423183    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:48:08.423257    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:48:08.434045    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:48:08.434130    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:48:08.445518    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:48:08.445593    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:48:08.456741    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:48:08.456812    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:48:08.468032    4105 logs.go:282] 0 containers: []
	W1014 07:48:08.468043    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:48:08.468118    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:48:08.479168    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:48:08.479185    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:48:08.479191    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:48:08.491537    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:48:08.491548    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:48:08.504835    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:48:08.504847    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:48:08.520384    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:48:08.520394    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:48:08.533436    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:48:08.533451    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:48:08.558453    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:48:08.558460    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:48:08.570711    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:48:08.570726    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:48:08.588369    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:48:08.588380    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:48:08.627343    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:48:08.627351    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:48:08.642236    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:48:08.642249    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:48:08.654373    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:48:08.654383    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:48:08.669040    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:48:08.669050    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:48:08.681217    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:48:08.681229    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:48:08.685923    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:48:08.685930    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:48:08.706496    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:48:08.706506    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:48:11.243657    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:48:16.245919    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:48:16.246121    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 07:48:16.262120    4105 logs.go:282] 1 containers: [87bc8accb53d]
	I1014 07:48:16.262208    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 07:48:16.275001    4105 logs.go:282] 1 containers: [e975d7240ea5]
	I1014 07:48:16.275086    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 07:48:16.289495    4105 logs.go:282] 4 containers: [8e114888d8f6 d6d1c4461f23 c752d866d7d0 9bf421ad2bd9]
	I1014 07:48:16.289582    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 07:48:16.300387    4105 logs.go:282] 1 containers: [0dee1382a8e5]
	I1014 07:48:16.300469    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 07:48:16.315088    4105 logs.go:282] 1 containers: [35e4e80f297e]
	I1014 07:48:16.315167    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 07:48:16.325410    4105 logs.go:282] 1 containers: [61eb11a81bf1]
	I1014 07:48:16.325494    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 07:48:16.335991    4105 logs.go:282] 0 containers: []
	W1014 07:48:16.336006    4105 logs.go:284] No container was found matching "kindnet"
	I1014 07:48:16.336078    4105 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1014 07:48:16.350969    4105 logs.go:282] 1 containers: [5a17904df046]
	I1014 07:48:16.350993    4105 logs.go:123] Gathering logs for coredns [c752d866d7d0] ...
	I1014 07:48:16.350998    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c752d866d7d0"
	I1014 07:48:16.362600    4105 logs.go:123] Gathering logs for coredns [9bf421ad2bd9] ...
	I1014 07:48:16.362611    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf421ad2bd9"
	I1014 07:48:16.374418    4105 logs.go:123] Gathering logs for kube-scheduler [0dee1382a8e5] ...
	I1014 07:48:16.374431    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dee1382a8e5"
	I1014 07:48:16.389206    4105 logs.go:123] Gathering logs for kubelet ...
	I1014 07:48:16.389218    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 07:48:16.428952    4105 logs.go:123] Gathering logs for kube-proxy [35e4e80f297e] ...
	I1014 07:48:16.428963    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e4e80f297e"
	I1014 07:48:16.440768    4105 logs.go:123] Gathering logs for storage-provisioner [5a17904df046] ...
	I1014 07:48:16.440778    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a17904df046"
	I1014 07:48:16.453167    4105 logs.go:123] Gathering logs for coredns [8e114888d8f6] ...
	I1014 07:48:16.453178    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e114888d8f6"
	I1014 07:48:16.464997    4105 logs.go:123] Gathering logs for kube-apiserver [87bc8accb53d] ...
	I1014 07:48:16.465009    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87bc8accb53d"
	I1014 07:48:16.479227    4105 logs.go:123] Gathering logs for describe nodes ...
	I1014 07:48:16.479237    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 07:48:16.516345    4105 logs.go:123] Gathering logs for etcd [e975d7240ea5] ...
	I1014 07:48:16.516356    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e975d7240ea5"
	I1014 07:48:16.530695    4105 logs.go:123] Gathering logs for coredns [d6d1c4461f23] ...
	I1014 07:48:16.530708    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6d1c4461f23"
	I1014 07:48:16.550828    4105 logs.go:123] Gathering logs for kube-controller-manager [61eb11a81bf1] ...
	I1014 07:48:16.550840    4105 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61eb11a81bf1"
	I1014 07:48:16.569022    4105 logs.go:123] Gathering logs for Docker ...
	I1014 07:48:16.569032    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 07:48:16.593662    4105 logs.go:123] Gathering logs for container status ...
	I1014 07:48:16.593672    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 07:48:16.605469    4105 logs.go:123] Gathering logs for dmesg ...
	I1014 07:48:16.605481    4105 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 07:48:19.112035    4105 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1014 07:48:24.114494    4105 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1014 07:48:24.124201    4105 out.go:201] 
	W1014 07:48:24.128661    4105 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1014 07:48:24.128697    4105 out.go:270] * 
	W1014 07:48:24.130018    4105 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:48:24.144966    4105 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-496000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (618.24s)
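The tail of the log above shows the actual failure mechanism: minikube probes https://10.0.2.15:8443/healthz with a short client timeout (api_server.go:253/269), and each time the probe deadline expires it falls back to collecting diagnostics (docker ps name filters to find the control-plane containers, docker logs --tail 400 for each, journalctl for kubelet and Docker) before probing again, until the overall 6m0s node-wait budget runs out and it exits with GUEST_START. A minimal Go sketch of that poll-until-healthy loop follows; the URL and timeouts are taken from the log, but the function names and structure are illustrative assumptions, not minikube's actual code.

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 OK or the overall budget
    // expires. Each probe gets its own short timeout, matching the ~5s gap
    // between "Checking apiserver healthz" and "stopped: ... context deadline
    // exceeded" in the log above.
    func waitForHealthz(url string, probeTimeout, budget time.Duration) error {
        client := &http.Client{
            Timeout: probeTimeout,
            // Assumption: a bring-up probe skips cert verification, since the
            // apiserver's serving certificate is minted during bootstrap.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reported healthy
                }
            }
            // In the real log, the container/journal log gathering happens here,
            // which is why probe cycles are ~8s apart rather than back-to-back.
            time.Sleep(2 * time.Second)
        }
        return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute); err != nil {
            fmt.Println("X Exiting due to GUEST_START:", err)
        }
    }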

                                                
                                    
TestPause/serial/Start (9.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-917000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-917000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.865719583s)

                                                
                                                
-- stdout --
	* [pause-917000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-917000" primary control-plane node in "pause-917000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-917000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-917000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-917000 -n pause-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-917000 -n pause-917000: exit status 7 (48.187125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.91s)
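This failure (and every remaining qemu2 start below) never reaches Kubernetes at all: the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet, so QEMU networking setup dies with "Connection refused" during VM creation. A quick standalone probe of that socket, as a hedged Go sketch (the path comes from the log; the probe itself is not part of minikube):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path reported by the failing tests.
        const sock = "/var/run/socket_vmnet"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // On this CI host nothing is listening, so connect(2) fails with
            // ECONNREFUSED (or ENOENT if the socket file is missing entirely).
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the socket_vmnet service on the host (it is normally run under launchd) is the likely fix; the tests themselves cannot recover from it.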

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-500000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-500000 --driver=qemu2 : exit status 80 (9.863604209s)

                                                
                                                
-- stdout --
	* [NoKubernetes-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-500000" primary control-plane node in "NoKubernetes-500000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-500000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-500000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-500000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000: exit status 7 (69.459667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.93s)
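This subtest and the three NoKubernetes subtests that follow fail the same way, and their stderr shows minikube's recovery shape: StartHost fails, it warns "StartHost failed, but will try again", deletes or restarts the VM, retries exactly once, and then exits 80 with GUEST_PROVISION. A hedged Go sketch of that try-twice-then-give-up pattern (all names here are illustrative, not minikube's internals):

    package main

    import "fmt"

    // startHost stands in for the qemu2 driver's create/start call; on this
    // host it always fails because socket_vmnet is down.
    func startHost() error {
        return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
    }

    // startWithRetry mirrors the observed behavior: warn and retry once (the
    // "Deleting ..." / "Restarting existing qemu2 VM ..." step in the logs),
    // then fail hard with GUEST_PROVISION.
    func startWithRetry() error {
        err := startHost()
        if err == nil {
            return nil
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        if err := startHost(); err != nil {
            return fmt.Errorf("GUEST_PROVISION: error provisioning guest: Failed to start host: %w", err)
        }
        return nil
    }

    func main() {
        if err := startWithRetry(); err != nil {
            fmt.Println("X Exiting due to", err)
        }
    }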

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --driver=qemu2 : exit status 80 (5.248510458s)

                                                
                                                
-- stdout --
	* [NoKubernetes-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-500000
	* Restarting existing qemu2 VM for "NoKubernetes-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000: exit status 7 (58.449834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --driver=qemu2 
E1014 07:48:46.518215    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/functional-365000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --driver=qemu2 : exit status 80 (5.255541125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-500000
	* Restarting existing qemu2 VM for "NoKubernetes-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000: exit status 7 (54.604958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-500000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-500000 --driver=qemu2 : exit status 80 (5.722184666s)

                                                
                                                
-- stdout --
	* [NoKubernetes-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-500000
	* Restarting existing qemu2 VM for "NoKubernetes-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-500000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-500000 -n NoKubernetes-500000: exit status 7 (39.33375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.76s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.78s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19790
- KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1930077197/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.78s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19790
- KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2036172647/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.42s)
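Both TestHyperkitDriverSkipUpgrade failures are an environment mismatch rather than an upgrade bug: hyperkit is an Intel-only macOS hypervisor wrapper, so on this darwin/arm64 host minikube rejects the driver up front with DRV_UNSUPPORTED_OS and exit status 56. The guard reduces to a GOOS/GOARCH check; a minimal sketch, with the exit-code handling assumed from the test output (minikube's real validation lives in its driver registry):

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    // hyperkitSupported reports whether the hyperkit driver can run here.
    // hyperkit builds only for darwin/amd64.
    func hyperkitSupported() bool {
        return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
    }

    func main() {
        if !hyperkitSupported() {
            fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
                runtime.GOOS, runtime.GOARCH)
            os.Exit(56) // matches the "exit status 56" the test reports
        }
        fmt.Println("hyperkit driver is usable on this host")
    }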

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.756898208s)

                                                
                                                
-- stdout --
	* [auto-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-513000" primary control-plane node in "auto-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:49:44.984618    4672 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:49:44.984773    4672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:44.984777    4672 out.go:358] Setting ErrFile to fd 2...
	I1014 07:49:44.984780    4672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:44.984918    4672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:49:44.986047    4672 out.go:352] Setting JSON to false
	I1014 07:49:45.003583    4672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4754,"bootTime":1728912630,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:49:45.003661    4672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:49:45.009285    4672 out.go:177] * [auto-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:49:45.016417    4672 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:49:45.016465    4672 notify.go:220] Checking for updates...
	I1014 07:49:45.022331    4672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:49:45.025371    4672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:49:45.028383    4672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:49:45.031316    4672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:49:45.034363    4672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:49:45.037739    4672 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:49:45.037815    4672 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:49:45.037864    4672 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:49:45.042295    4672 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:49:45.049242    4672 start.go:297] selected driver: qemu2
	I1014 07:49:45.049249    4672 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:49:45.049255    4672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:49:45.051723    4672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:49:45.055332    4672 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:49:45.058420    4672 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:49:45.058434    4672 cni.go:84] Creating CNI manager for ""
	I1014 07:49:45.058453    4672 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:49:45.058457    4672 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:49:45.058486    4672 start.go:340] cluster config:
	{Name:auto-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:49:45.062926    4672 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:49:45.071288    4672 out.go:177] * Starting "auto-513000" primary control-plane node in "auto-513000" cluster
	I1014 07:49:45.075180    4672 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:49:45.075198    4672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:49:45.075208    4672 cache.go:56] Caching tarball of preloaded images
	I1014 07:49:45.075288    4672 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:49:45.075300    4672 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:49:45.075363    4672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/auto-513000/config.json ...
	I1014 07:49:45.075378    4672 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/auto-513000/config.json: {Name:mkd1d91efbf58e5fe64e746e519cd7253a261408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:49:45.075625    4672 start.go:360] acquireMachinesLock for auto-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:45.075676    4672 start.go:364] duration metric: took 44.917µs to acquireMachinesLock for "auto-513000"
	I1014 07:49:45.075692    4672 start.go:93] Provisioning new machine with config: &{Name:auto-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:45.075722    4672 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:45.079412    4672 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:49:45.096301    4672 start.go:159] libmachine.API.Create for "auto-513000" (driver="qemu2")
	I1014 07:49:45.096323    4672 client.go:168] LocalClient.Create starting
	I1014 07:49:45.096389    4672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:45.096427    4672 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:45.096439    4672 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:45.096473    4672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:45.096505    4672 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:45.096513    4672 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:45.096917    4672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:45.253433    4672 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:45.290470    4672 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:45.290476    4672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:45.290695    4672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2
	I1014 07:49:45.300594    4672 main.go:141] libmachine: STDOUT: 
	I1014 07:49:45.300615    4672 main.go:141] libmachine: STDERR: 
	I1014 07:49:45.300673    4672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2 +20000M
	I1014 07:49:45.309195    4672 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:45.309213    4672 main.go:141] libmachine: STDERR: 
	I1014 07:49:45.309227    4672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2
	I1014 07:49:45.309235    4672 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:45.309252    4672 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:45.309277    4672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:e3:5c:a2:12:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2
	I1014 07:49:45.311059    4672 main.go:141] libmachine: STDOUT: 
	I1014 07:49:45.311074    4672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:45.311096    4672 client.go:171] duration metric: took 214.767709ms to LocalClient.Create
	I1014 07:49:47.313309    4672 start.go:128] duration metric: took 2.237595666s to createHost
	I1014 07:49:47.313369    4672 start.go:83] releasing machines lock for "auto-513000", held for 2.237712958s
	W1014 07:49:47.313412    4672 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:47.322860    4672 out.go:177] * Deleting "auto-513000" in qemu2 ...
	W1014 07:49:47.351041    4672 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:47.351072    4672 start.go:729] Will try again in 5 seconds ...
	I1014 07:49:52.353237    4672 start.go:360] acquireMachinesLock for auto-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:52.353762    4672 start.go:364] duration metric: took 429.708µs to acquireMachinesLock for "auto-513000"
	I1014 07:49:52.353884    4672 start.go:93] Provisioning new machine with config: &{Name:auto-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:52.354136    4672 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:52.368859    4672 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:49:52.419421    4672 start.go:159] libmachine.API.Create for "auto-513000" (driver="qemu2")
	I1014 07:49:52.419480    4672 client.go:168] LocalClient.Create starting
	I1014 07:49:52.419608    4672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:52.419686    4672 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:52.419704    4672 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:52.419764    4672 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:52.419820    4672 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:52.419832    4672 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:52.420700    4672 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:52.589230    4672 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:52.639582    4672 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:52.639587    4672 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:52.639817    4672 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2
	I1014 07:49:52.649678    4672 main.go:141] libmachine: STDOUT: 
	I1014 07:49:52.649702    4672 main.go:141] libmachine: STDERR: 
	I1014 07:49:52.649754    4672 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2 +20000M
	I1014 07:49:52.658208    4672 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:52.658240    4672 main.go:141] libmachine: STDERR: 
	I1014 07:49:52.658269    4672 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2
	I1014 07:49:52.658272    4672 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:52.658282    4672 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:52.658317    4672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:08:d6:fd:79:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/auto-513000/disk.qcow2
	I1014 07:49:52.660118    4672 main.go:141] libmachine: STDOUT: 
	I1014 07:49:52.660132    4672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:52.660146    4672 client.go:171] duration metric: took 240.664917ms to LocalClient.Create
	I1014 07:49:54.662321    4672 start.go:128] duration metric: took 2.308181458s to createHost
	I1014 07:49:54.662653    4672 start.go:83] releasing machines lock for "auto-513000", held for 2.3088875s
	W1014 07:49:54.664105    4672 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:54.675787    4672 out.go:201] 
	W1014 07:49:54.679866    4672 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:49:54.679915    4672 out.go:270] * 
	* 
	W1014 07:49:54.682004    4672 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:49:54.697688    4672 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
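Every qemu2 start in this group fails at the same point: socket_vmnet_client cannot reach the daemon's unix socket, so QEMU is never launched and minikube gives up after one retry. A "Connection refused" on a unix socket (as opposed to "no such file or directory") suggests /var/run/socket_vmnet exists but nothing is listening on it, i.e. the socket_vmnet daemon on this agent was down. The failing probe can be reproduced with a standalone Go sketch (a hypothetical diagnostic, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied verbatim from the failing socket_vmnet_client invocations above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no daemon is accepting;
		// "no such file or directory" would mean socket_vmnet was never started at all.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy host this prints the success line; on this agent it would exit 1 with the same "Connection refused" that gets wrapped into the GUEST_PROVISION errors above.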

TestNetworkPlugins/group/flannel/Start (10.03s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.025179916s)

-- stdout --
	* [flannel-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-513000" primary control-plane node in "flannel-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:49:57.045829    4790 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:49:57.045976    4790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:57.045979    4790 out.go:358] Setting ErrFile to fd 2...
	I1014 07:49:57.045981    4790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:49:57.046108    4790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:49:57.047235    4790 out.go:352] Setting JSON to false
	I1014 07:49:57.064852    4790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4767,"bootTime":1728912630,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:49:57.064925    4790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:49:57.071362    4790 out.go:177] * [flannel-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:49:57.079279    4790 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:49:57.079320    4790 notify.go:220] Checking for updates...
	I1014 07:49:57.083703    4790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:49:57.086265    4790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:49:57.089323    4790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:49:57.092333    4790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:49:57.095346    4790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:49:57.098708    4790 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:49:57.098789    4790 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:49:57.098843    4790 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:49:57.103332    4790 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:49:57.110283    4790 start.go:297] selected driver: qemu2
	I1014 07:49:57.110290    4790 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:49:57.110296    4790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:49:57.112830    4790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:49:57.116311    4790 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:49:57.119395    4790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:49:57.119421    4790 cni.go:84] Creating CNI manager for "flannel"
	I1014 07:49:57.119425    4790 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1014 07:49:57.119482    4790 start.go:340] cluster config:
	{Name:flannel-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:49:57.124137    4790 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:49:57.131215    4790 out.go:177] * Starting "flannel-513000" primary control-plane node in "flannel-513000" cluster
	I1014 07:49:57.135310    4790 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:49:57.135330    4790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:49:57.135341    4790 cache.go:56] Caching tarball of preloaded images
	I1014 07:49:57.135432    4790 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:49:57.135438    4790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:49:57.135499    4790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/flannel-513000/config.json ...
	I1014 07:49:57.135510    4790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/flannel-513000/config.json: {Name:mk5feaf4c8f820a60554e8b7f49ecd65bd716267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:49:57.135911    4790 start.go:360] acquireMachinesLock for flannel-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:49:57.135962    4790 start.go:364] duration metric: took 45µs to acquireMachinesLock for "flannel-513000"
	I1014 07:49:57.135975    4790 start.go:93] Provisioning new machine with config: &{Name:flannel-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:49:57.136011    4790 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:49:57.139246    4790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:49:57.156679    4790 start.go:159] libmachine.API.Create for "flannel-513000" (driver="qemu2")
	I1014 07:49:57.156702    4790 client.go:168] LocalClient.Create starting
	I1014 07:49:57.156774    4790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:49:57.156812    4790 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:57.156829    4790 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:57.156875    4790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:49:57.156903    4790 main.go:141] libmachine: Decoding PEM data...
	I1014 07:49:57.156914    4790 main.go:141] libmachine: Parsing certificate...
	I1014 07:49:57.157269    4790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:49:57.311717    4790 main.go:141] libmachine: Creating SSH key...
	I1014 07:49:57.376711    4790 main.go:141] libmachine: Creating Disk image...
	I1014 07:49:57.376717    4790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:49:57.376936    4790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2
	I1014 07:49:57.387128    4790 main.go:141] libmachine: STDOUT: 
	I1014 07:49:57.387144    4790 main.go:141] libmachine: STDERR: 
	I1014 07:49:57.387205    4790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2 +20000M
	I1014 07:49:57.395654    4790 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:49:57.395683    4790 main.go:141] libmachine: STDERR: 
	I1014 07:49:57.395703    4790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2
	I1014 07:49:57.395709    4790 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:49:57.395719    4790 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:49:57.395745    4790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:82:84:49:cb:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2
	I1014 07:49:57.397571    4790 main.go:141] libmachine: STDOUT: 
	I1014 07:49:57.397584    4790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:49:57.397602    4790 client.go:171] duration metric: took 240.896625ms to LocalClient.Create
	I1014 07:49:59.399795    4790 start.go:128] duration metric: took 2.26378875s to createHost
	I1014 07:49:59.399897    4790 start.go:83] releasing machines lock for "flannel-513000", held for 2.263954875s
	W1014 07:49:59.399955    4790 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:59.412315    4790 out.go:177] * Deleting "flannel-513000" in qemu2 ...
	W1014 07:49:59.442209    4790 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:49:59.442241    4790 start.go:729] Will try again in 5 seconds ...
	I1014 07:50:04.444451    4790 start.go:360] acquireMachinesLock for flannel-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:04.445081    4790 start.go:364] duration metric: took 502.708µs to acquireMachinesLock for "flannel-513000"
	I1014 07:50:04.445224    4790 start.go:93] Provisioning new machine with config: &{Name:flannel-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:04.445509    4790 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:04.460147    4790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:04.509306    4790 start.go:159] libmachine.API.Create for "flannel-513000" (driver="qemu2")
	I1014 07:50:04.509352    4790 client.go:168] LocalClient.Create starting
	I1014 07:50:04.509495    4790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:04.509572    4790 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:04.509590    4790 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:04.509647    4790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:04.509711    4790 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:04.509725    4790 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:04.510264    4790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:04.677423    4790 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:04.972171    4790 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:04.972182    4790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:04.972449    4790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2
	I1014 07:50:04.982787    4790 main.go:141] libmachine: STDOUT: 
	I1014 07:50:04.982810    4790 main.go:141] libmachine: STDERR: 
	I1014 07:50:04.982869    4790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2 +20000M
	I1014 07:50:04.991450    4790 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:04.991466    4790 main.go:141] libmachine: STDERR: 
	I1014 07:50:04.991483    4790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2
	I1014 07:50:04.991489    4790 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:04.991498    4790 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:04.991527    4790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:72:a7:3c:22:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000/disk.qcow2
	I1014 07:50:04.993327    4790 main.go:141] libmachine: STDOUT: 
	I1014 07:50:04.993341    4790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:04.993352    4790 client.go:171] duration metric: took 483.999041ms to LocalClient.Create
	I1014 07:50:06.995501    4790 start.go:128] duration metric: took 2.549998833s to createHost
	I1014 07:50:06.995611    4790 start.go:83] releasing machines lock for "flannel-513000", held for 2.550506125s
	W1014 07:50:06.995927    4790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:07.008624    4790 out.go:201] 
	W1014 07:50:07.012702    4790 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:50:07.012758    4790 out.go:270] * 
	* 
	W1014 07:50:07.015330    4790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:50:07.025537    4790 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.03s)
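The flannel run confirms the failure is isolated to host networking: both qemu-img invocations succeed ("Image resized.") and only the socket_vmnet connection is refused. For reference, the two disk-image commands from the log, wrapped in an illustrative Go sketch (not libmachine's actual code; the machine directory below is this agent's and would need adjusting for a local reproduction):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Machine directory as it appears in the log above; adjust for local use.
	base := "/Users/jenkins/minikube-integration/19790-979/.minikube/machines/flannel-513000"
	steps := [][]string{
		// Step 1: convert the raw seed image to qcow2 (succeeds in the log).
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", base + "/disk.qcow2.raw", base + "/disk.qcow2"},
		// Step 2: grow the qcow2 image by 20000M (also succeeds: "Image resized.").
		{"qemu-img", "resize", base + "/disk.qcow2", "+20000M"},
	}
	for _, step := range steps {
		if out, err := exec.Command(step[0], step[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", step, err, out)
		}
	}
	log.Println("disk image ready; the failures above begin only at the socket_vmnet step")
}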

TestNetworkPlugins/group/enable-default-cni/Start (9.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.972711083s)

-- stdout --
	* [enable-default-cni-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-513000" primary control-plane node in "enable-default-cni-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:50:09.530776    4911 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:50:09.530915    4911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:09.530918    4911 out.go:358] Setting ErrFile to fd 2...
	I1014 07:50:09.530921    4911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:09.531044    4911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:50:09.532235    4911 out.go:352] Setting JSON to false
	I1014 07:50:09.549993    4911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4779,"bootTime":1728912630,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:50:09.550071    4911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:50:09.555718    4911 out.go:177] * [enable-default-cni-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:50:09.559686    4911 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:50:09.559729    4911 notify.go:220] Checking for updates...
	I1014 07:50:09.565640    4911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:50:09.567025    4911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:50:09.569598    4911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:50:09.572685    4911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:50:09.575638    4911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:50:09.579061    4911 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:09.579140    4911 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:09.579188    4911 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:50:09.583648    4911 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:50:09.590592    4911 start.go:297] selected driver: qemu2
	I1014 07:50:09.590598    4911 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:50:09.590603    4911 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:50:09.593012    4911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:50:09.595661    4911 out.go:177] * Automatically selected the socket_vmnet network
	E1014 07:50:09.598783    4911 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1014 07:50:09.598801    4911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:50:09.598816    4911 cni.go:84] Creating CNI manager for "bridge"
	I1014 07:50:09.598822    4911 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:50:09.598854    4911 start.go:340] cluster config:
	{Name:enable-default-cni-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:50:09.603512    4911 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:50:09.611614    4911 out.go:177] * Starting "enable-default-cni-513000" primary control-plane node in "enable-default-cni-513000" cluster
	I1014 07:50:09.615635    4911 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:50:09.615653    4911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:50:09.615664    4911 cache.go:56] Caching tarball of preloaded images
	I1014 07:50:09.615754    4911 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:50:09.615760    4911 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:50:09.615829    4911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/enable-default-cni-513000/config.json ...
	I1014 07:50:09.615841    4911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/enable-default-cni-513000/config.json: {Name:mk7a48fcc3f009e9aee10707766ecad6d8cdc951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:50:09.616237    4911 start.go:360] acquireMachinesLock for enable-default-cni-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:09.616293    4911 start.go:364] duration metric: took 45.292µs to acquireMachinesLock for "enable-default-cni-513000"
	I1014 07:50:09.616306    4911 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:09.616332    4911 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:09.620637    4911 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:09.637960    4911 start.go:159] libmachine.API.Create for "enable-default-cni-513000" (driver="qemu2")
	I1014 07:50:09.637989    4911 client.go:168] LocalClient.Create starting
	I1014 07:50:09.638051    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:09.638087    4911 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:09.638099    4911 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:09.638133    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:09.638161    4911 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:09.638168    4911 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:09.638646    4911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:09.794819    4911 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:09.991365    4911 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:09.991374    4911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:09.991620    4911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2
	I1014 07:50:10.001952    4911 main.go:141] libmachine: STDOUT: 
	I1014 07:50:10.001969    4911 main.go:141] libmachine: STDERR: 
	I1014 07:50:10.002030    4911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2 +20000M
	I1014 07:50:10.010459    4911 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:10.010485    4911 main.go:141] libmachine: STDERR: 
	I1014 07:50:10.010502    4911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2
	I1014 07:50:10.010508    4911 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:10.010521    4911 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:10.010548    4911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:3f:af:c2:6d:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2
	I1014 07:50:10.012387    4911 main.go:141] libmachine: STDOUT: 
	I1014 07:50:10.012406    4911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:10.012427    4911 client.go:171] duration metric: took 374.43525ms to LocalClient.Create
	I1014 07:50:12.014584    4911 start.go:128] duration metric: took 2.398262875s to createHost
	I1014 07:50:12.014644    4911 start.go:83] releasing machines lock for "enable-default-cni-513000", held for 2.398362167s
	W1014 07:50:12.014742    4911 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:12.025840    4911 out.go:177] * Deleting "enable-default-cni-513000" in qemu2 ...
	W1014 07:50:12.056479    4911 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:12.056506    4911 start.go:729] Will try again in 5 seconds ...
	I1014 07:50:17.058652    4911 start.go:360] acquireMachinesLock for enable-default-cni-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:17.059292    4911 start.go:364] duration metric: took 527.667µs to acquireMachinesLock for "enable-default-cni-513000"
	I1014 07:50:17.059436    4911 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:17.059761    4911 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:17.075278    4911 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:17.124483    4911 start.go:159] libmachine.API.Create for "enable-default-cni-513000" (driver="qemu2")
	I1014 07:50:17.124553    4911 client.go:168] LocalClient.Create starting
	I1014 07:50:17.124710    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:17.124800    4911 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:17.124823    4911 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:17.124906    4911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:17.124963    4911 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:17.124981    4911 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:17.125682    4911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:17.291684    4911 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:17.399464    4911 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:17.399471    4911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:17.399691    4911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2
	I1014 07:50:17.409733    4911 main.go:141] libmachine: STDOUT: 
	I1014 07:50:17.409750    4911 main.go:141] libmachine: STDERR: 
	I1014 07:50:17.409815    4911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2 +20000M
	I1014 07:50:17.418236    4911 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:17.418257    4911 main.go:141] libmachine: STDERR: 
	I1014 07:50:17.418269    4911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2
	I1014 07:50:17.418273    4911 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:17.418285    4911 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:17.418321    4911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:9d:5f:94:57:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/enable-default-cni-513000/disk.qcow2
	I1014 07:50:17.420107    4911 main.go:141] libmachine: STDOUT: 
	I1014 07:50:17.420120    4911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:17.420135    4911 client.go:171] duration metric: took 295.570459ms to LocalClient.Create
	I1014 07:50:19.422283    4911 start.go:128] duration metric: took 2.362524208s to createHost
	I1014 07:50:19.422342    4911 start.go:83] releasing machines lock for "enable-default-cni-513000", held for 2.363053833s
	W1014 07:50:19.422724    4911 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:19.436375    4911 out.go:201] 
	W1014 07:50:19.441517    4911 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:50:19.441562    4911 out.go:270] * 
	* 
	W1014 07:50:19.444229    4911 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:50:19.456298    4911 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.97s)
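
Every failure in this group dies at the same step: socket_vmnet_client is asked to hand QEMU a file descriptor for the vmnet network, but nothing is listening on /var/run/socket_vmnet, so the connect is refused before the VM ever boots. A minimal standalone Go probe (a hypothetical helper, not part of minikube or this test suite) makes that check reproducible outside the harness:

	// probe_socket_vmnet.go - dials the unix socket that socket_vmnet_client
	// needs and reports whether a daemon is accepting connections there.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // the path every failing start above uses
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With no daemon listening this prints the same "connection refused"
			// that socket_vmnet_client reports in the log.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Run on the failing agent the probe should exit 1 with the same refusal seen above; once the socket_vmnet daemon is running again it should print the success line instead.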

TestNetworkPlugins/group/kindnet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.756206292s)

-- stdout --
	* [kindnet-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-513000" primary control-plane node in "kindnet-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:50:21.836536    5022 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:50:21.837345    5022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:21.837394    5022 out.go:358] Setting ErrFile to fd 2...
	I1014 07:50:21.837402    5022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:21.837724    5022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:50:21.839187    5022 out.go:352] Setting JSON to false
	I1014 07:50:21.857148    5022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4791,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:50:21.857235    5022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:50:21.863720    5022 out.go:177] * [kindnet-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:50:21.870628    5022 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:50:21.870682    5022 notify.go:220] Checking for updates...
	I1014 07:50:21.876557    5022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:50:21.879631    5022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:50:21.882672    5022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:50:21.885562    5022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:50:21.888648    5022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:50:21.892010    5022 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:21.892093    5022 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:21.892149    5022 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:50:21.895641    5022 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:50:21.902648    5022 start.go:297] selected driver: qemu2
	I1014 07:50:21.902657    5022 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:50:21.902664    5022 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:50:21.905210    5022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:50:21.916227    5022 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:50:21.919698    5022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:50:21.919720    5022 cni.go:84] Creating CNI manager for "kindnet"
	I1014 07:50:21.919725    5022 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:50:21.919756    5022 start.go:340] cluster config:
	{Name:kindnet-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:50:21.924758    5022 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:50:21.933608    5022 out.go:177] * Starting "kindnet-513000" primary control-plane node in "kindnet-513000" cluster
	I1014 07:50:21.937630    5022 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:50:21.937650    5022 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:50:21.937661    5022 cache.go:56] Caching tarball of preloaded images
	I1014 07:50:21.937751    5022 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:50:21.937757    5022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:50:21.937823    5022 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/kindnet-513000/config.json ...
	I1014 07:50:21.937839    5022 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/kindnet-513000/config.json: {Name:mk294a12d51588d15af746d7a199c00550880c27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:50:21.938243    5022 start.go:360] acquireMachinesLock for kindnet-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:21.938294    5022 start.go:364] duration metric: took 45µs to acquireMachinesLock for "kindnet-513000"
	I1014 07:50:21.938307    5022 start.go:93] Provisioning new machine with config: &{Name:kindnet-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:21.938344    5022 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:21.942621    5022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:21.960760    5022 start.go:159] libmachine.API.Create for "kindnet-513000" (driver="qemu2")
	I1014 07:50:21.960781    5022 client.go:168] LocalClient.Create starting
	I1014 07:50:21.960854    5022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:21.960897    5022 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:21.960909    5022 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:21.960949    5022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:21.960980    5022 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:21.960989    5022 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:21.961448    5022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:22.117804    5022 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:22.165178    5022 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:22.165185    5022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:22.165404    5022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2
	I1014 07:50:22.175209    5022 main.go:141] libmachine: STDOUT: 
	I1014 07:50:22.175232    5022 main.go:141] libmachine: STDERR: 
	I1014 07:50:22.175289    5022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2 +20000M
	I1014 07:50:22.183683    5022 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:22.183698    5022 main.go:141] libmachine: STDERR: 
	I1014 07:50:22.183717    5022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2
	I1014 07:50:22.183724    5022 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:22.183735    5022 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:22.183764    5022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:73:f3:44:c5:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2
	I1014 07:50:22.185493    5022 main.go:141] libmachine: STDOUT: 
	I1014 07:50:22.185504    5022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:22.185525    5022 client.go:171] duration metric: took 224.739708ms to LocalClient.Create
	I1014 07:50:24.187707    5022 start.go:128] duration metric: took 2.24936425s to createHost
	I1014 07:50:24.187791    5022 start.go:83] releasing machines lock for "kindnet-513000", held for 2.249512875s
	W1014 07:50:24.187913    5022 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:24.202110    5022 out.go:177] * Deleting "kindnet-513000" in qemu2 ...
	W1014 07:50:24.227760    5022 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:24.227794    5022 start.go:729] Will try again in 5 seconds ...
	I1014 07:50:29.229985    5022 start.go:360] acquireMachinesLock for kindnet-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:29.230628    5022 start.go:364] duration metric: took 528.708µs to acquireMachinesLock for "kindnet-513000"
	I1014 07:50:29.230776    5022 start.go:93] Provisioning new machine with config: &{Name:kindnet-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:29.231042    5022 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:29.246774    5022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:29.294180    5022 start.go:159] libmachine.API.Create for "kindnet-513000" (driver="qemu2")
	I1014 07:50:29.294228    5022 client.go:168] LocalClient.Create starting
	I1014 07:50:29.294352    5022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:29.294427    5022 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:29.294443    5022 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:29.294521    5022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:29.294578    5022 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:29.294589    5022 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:29.295149    5022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:29.461638    5022 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:29.497816    5022 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:29.497822    5022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:29.498055    5022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2
	I1014 07:50:29.507926    5022 main.go:141] libmachine: STDOUT: 
	I1014 07:50:29.507946    5022 main.go:141] libmachine: STDERR: 
	I1014 07:50:29.508002    5022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2 +20000M
	I1014 07:50:29.516322    5022 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:29.516338    5022 main.go:141] libmachine: STDERR: 
	I1014 07:50:29.516355    5022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2
	I1014 07:50:29.516359    5022 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:29.516370    5022 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:29.516397    5022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:be:a2:6a:98:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kindnet-513000/disk.qcow2
	I1014 07:50:29.518146    5022 main.go:141] libmachine: STDOUT: 
	I1014 07:50:29.518159    5022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:29.518171    5022 client.go:171] duration metric: took 223.942167ms to LocalClient.Create
	I1014 07:50:31.520324    5022 start.go:128] duration metric: took 2.289280542s to createHost
	I1014 07:50:31.520408    5022 start.go:83] releasing machines lock for "kindnet-513000", held for 2.289759833s
	W1014 07:50:31.520792    5022 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:31.533422    5022 out.go:201] 
	W1014 07:50:31.537530    5022 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:50:31.537589    5022 out.go:270] * 
	* 
	W1014 07:50:31.540178    5022 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:50:31.547364    5022 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.76s)
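
The stderr trace also shows the recovery path minikube takes on this error: build the disk with qemu-img convert and resize, launch QEMU through socket_vmnet_client, and on failure delete the profile and retry exactly once after five seconds before exiting with GUEST_PROVISION. A compressed sketch of that one-retry flow (not the real minikube code; createHost here is a stub standing in for the libmachine call):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine's create path; it always fails the
	// way the log does when nothing listens on /var/run/socket_vmnet.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err = createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status net_test.go asserts as "failed start: exit status 80"
			}
		}
	}

Both attempts fail identically here because the refused connection is an agent-side daemon problem, not anything the per-profile delete-and-retry can repair.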

TestNetworkPlugins/group/bridge/Start (9.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.889411917s)

-- stdout --
	* [bridge-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-513000" primary control-plane node in "bridge-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:50:34.030827    5139 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:50:34.031001    5139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:34.031004    5139 out.go:358] Setting ErrFile to fd 2...
	I1014 07:50:34.031006    5139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:34.031146    5139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:50:34.032293    5139 out.go:352] Setting JSON to false
	I1014 07:50:34.049784    5139 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4804,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:50:34.049857    5139 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:50:34.056017    5139 out.go:177] * [bridge-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:50:34.064054    5139 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:50:34.064105    5139 notify.go:220] Checking for updates...
	I1014 07:50:34.070044    5139 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:50:34.072960    5139 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:50:34.076027    5139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:50:34.079049    5139 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:50:34.082024    5139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:50:34.085415    5139 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:34.085494    5139 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:34.085545    5139 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:50:34.090047    5139 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:50:34.102011    5139 start.go:297] selected driver: qemu2
	I1014 07:50:34.102018    5139 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:50:34.102024    5139 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:50:34.104601    5139 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:50:34.108094    5139 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:50:34.111092    5139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:50:34.111110    5139 cni.go:84] Creating CNI manager for "bridge"
	I1014 07:50:34.111114    5139 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:50:34.111142    5139 start.go:340] cluster config:
	{Name:bridge-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:50:34.116212    5139 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:50:34.123907    5139 out.go:177] * Starting "bridge-513000" primary control-plane node in "bridge-513000" cluster
	I1014 07:50:34.128003    5139 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:50:34.128018    5139 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:50:34.128027    5139 cache.go:56] Caching tarball of preloaded images
	I1014 07:50:34.128105    5139 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:50:34.128111    5139 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:50:34.128171    5139 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/bridge-513000/config.json ...
	I1014 07:50:34.128182    5139 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/bridge-513000/config.json: {Name:mkbb178c36177022bb8f2d8748e77e21e347aa65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:50:34.128590    5139 start.go:360] acquireMachinesLock for bridge-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:34.128641    5139 start.go:364] duration metric: took 44.958µs to acquireMachinesLock for "bridge-513000"
	I1014 07:50:34.128654    5139 start.go:93] Provisioning new machine with config: &{Name:bridge-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:34.128680    5139 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:34.132853    5139 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:34.150250    5139 start.go:159] libmachine.API.Create for "bridge-513000" (driver="qemu2")
	I1014 07:50:34.150281    5139 client.go:168] LocalClient.Create starting
	I1014 07:50:34.150351    5139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:34.150390    5139 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:34.150409    5139 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:34.150443    5139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:34.150471    5139 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:34.150478    5139 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:34.150917    5139 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:34.308048    5139 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:34.420456    5139 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:34.420462    5139 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:34.420689    5139 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2
	I1014 07:50:34.430826    5139 main.go:141] libmachine: STDOUT: 
	I1014 07:50:34.430852    5139 main.go:141] libmachine: STDERR: 
	I1014 07:50:34.430907    5139 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2 +20000M
	I1014 07:50:34.439363    5139 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:34.439379    5139 main.go:141] libmachine: STDERR: 
	I1014 07:50:34.439399    5139 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2
	I1014 07:50:34.439405    5139 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:34.439418    5139 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:34.439460    5139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:66:a9:35:38:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2
	I1014 07:50:34.441301    5139 main.go:141] libmachine: STDOUT: 
	I1014 07:50:34.441314    5139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:34.441332    5139 client.go:171] duration metric: took 291.048833ms to LocalClient.Create
	I1014 07:50:36.443521    5139 start.go:128] duration metric: took 2.3148425s to createHost
	I1014 07:50:36.443616    5139 start.go:83] releasing machines lock for "bridge-513000", held for 2.31499525s
	W1014 07:50:36.443723    5139 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:36.456116    5139 out.go:177] * Deleting "bridge-513000" in qemu2 ...
	W1014 07:50:36.489402    5139 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:36.489437    5139 start.go:729] Will try again in 5 seconds ...
	I1014 07:50:41.491553    5139 start.go:360] acquireMachinesLock for bridge-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:41.492255    5139 start.go:364] duration metric: took 528.459µs to acquireMachinesLock for "bridge-513000"
	I1014 07:50:41.492464    5139 start.go:93] Provisioning new machine with config: &{Name:bridge-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:41.492810    5139 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:41.508645    5139 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:41.557116    5139 start.go:159] libmachine.API.Create for "bridge-513000" (driver="qemu2")
	I1014 07:50:41.557165    5139 client.go:168] LocalClient.Create starting
	I1014 07:50:41.557305    5139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:41.557390    5139 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:41.557408    5139 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:41.557472    5139 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:41.557540    5139 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:41.557555    5139 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:41.558367    5139 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:41.727703    5139 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:41.818560    5139 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:41.818567    5139 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:41.818726    5139 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2
	I1014 07:50:41.829172    5139 main.go:141] libmachine: STDOUT: 
	I1014 07:50:41.829194    5139 main.go:141] libmachine: STDERR: 
	I1014 07:50:41.829254    5139 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2 +20000M
	I1014 07:50:41.837661    5139 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:41.837677    5139 main.go:141] libmachine: STDERR: 
	I1014 07:50:41.837689    5139 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2
	I1014 07:50:41.837693    5139 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:41.837701    5139 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:41.837733    5139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:32:2f:b1:c1:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/bridge-513000/disk.qcow2
	I1014 07:50:41.839485    5139 main.go:141] libmachine: STDOUT: 
	I1014 07:50:41.839501    5139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:41.839515    5139 client.go:171] duration metric: took 282.347167ms to LocalClient.Create
	I1014 07:50:43.841722    5139 start.go:128] duration metric: took 2.348899292s to createHost
	I1014 07:50:43.841815    5139 start.go:83] releasing machines lock for "bridge-513000", held for 2.349531583s
	W1014 07:50:43.842206    5139 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:43.857132    5139 out.go:201] 
	W1014 07:50:43.862127    5139 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:50:43.862164    5139 out.go:270] * 
	* 
	W1014 07:50:43.864810    5139 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:50:43.872024    5139 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.89s)
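
All of the Start failures in this group share one root cause: the VM is launched through socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so every attempt dies with "Connection refused" before the cluster ever starts. A minimal probe of that socket, sketched below in Go, checks the same precondition; the socket path is the SocketVMnetPath value from the logged config, and everything else is illustrative.

    // socketprobe.go - checks whether the socket_vmnet daemon is accepting
    // connections on the path minikube is configured to use.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logs
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Same failure mode as the tests: "Connection refused".
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }

If the probe fails, restarting the daemon (for a Homebrew install, sudo brew services start socket_vmnet, per the upstream socket_vmnet documentation rather than anything in this log) should clear the whole group of failures.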

TestNetworkPlugins/group/kubenet/Start (10.04s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.041453875s)

-- stdout --
	* [kubenet-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-513000" primary control-plane node in "kubenet-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:50:46.263852    5253 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:50:46.264004    5253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:46.264007    5253 out.go:358] Setting ErrFile to fd 2...
	I1014 07:50:46.264010    5253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:46.264113    5253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:50:46.265245    5253 out.go:352] Setting JSON to false
	I1014 07:50:46.282723    5253 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4816,"bootTime":1728912630,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:50:46.282798    5253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:50:46.287898    5253 out.go:177] * [kubenet-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:50:46.295855    5253 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:50:46.295892    5253 notify.go:220] Checking for updates...
	I1014 07:50:46.303887    5253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:50:46.306861    5253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:50:46.308293    5253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:50:46.311827    5253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:50:46.314924    5253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:50:46.318280    5253 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:46.318357    5253 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:46.318408    5253 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:50:46.321819    5253 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:50:46.328851    5253 start.go:297] selected driver: qemu2
	I1014 07:50:46.328857    5253 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:50:46.328862    5253 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:50:46.331370    5253 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:50:46.333830    5253 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:50:46.337916    5253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:50:46.337942    5253 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1014 07:50:46.337971    5253 start.go:340] cluster config:
	{Name:kubenet-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:50:46.342609    5253 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:50:46.350809    5253 out.go:177] * Starting "kubenet-513000" primary control-plane node in "kubenet-513000" cluster
	I1014 07:50:46.354881    5253 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:50:46.354898    5253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:50:46.354910    5253 cache.go:56] Caching tarball of preloaded images
	I1014 07:50:46.354988    5253 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:50:46.354993    5253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:50:46.355062    5253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/kubenet-513000/config.json ...
	I1014 07:50:46.355073    5253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/kubenet-513000/config.json: {Name:mk29b1ee8e7d23dd11c55e116c9af2cc5bbe48a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:50:46.355444    5253 start.go:360] acquireMachinesLock for kubenet-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:46.355491    5253 start.go:364] duration metric: took 41.625µs to acquireMachinesLock for "kubenet-513000"
	I1014 07:50:46.355504    5253 start.go:93] Provisioning new machine with config: &{Name:kubenet-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:46.355536    5253 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:46.358912    5253 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:46.375655    5253 start.go:159] libmachine.API.Create for "kubenet-513000" (driver="qemu2")
	I1014 07:50:46.375681    5253 client.go:168] LocalClient.Create starting
	I1014 07:50:46.375743    5253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:46.375777    5253 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:46.375792    5253 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:46.375832    5253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:46.375860    5253 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:46.375869    5253 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:46.376280    5253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:46.532251    5253 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:46.723747    5253 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:46.723755    5253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:46.723974    5253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2
	I1014 07:50:46.734528    5253 main.go:141] libmachine: STDOUT: 
	I1014 07:50:46.734545    5253 main.go:141] libmachine: STDERR: 
	I1014 07:50:46.734614    5253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2 +20000M
	I1014 07:50:46.743135    5253 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:46.743151    5253 main.go:141] libmachine: STDERR: 
	I1014 07:50:46.743167    5253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2
	I1014 07:50:46.743174    5253 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:46.743193    5253 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:46.743222    5253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ad:3c:97:3e:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2
	I1014 07:50:46.745043    5253 main.go:141] libmachine: STDOUT: 
	I1014 07:50:46.745061    5253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:46.745088    5253 client.go:171] duration metric: took 369.405292ms to LocalClient.Create
	I1014 07:50:48.747243    5253 start.go:128] duration metric: took 2.391717833s to createHost
	I1014 07:50:48.747329    5253 start.go:83] releasing machines lock for "kubenet-513000", held for 2.391833167s
	W1014 07:50:48.747418    5253 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:48.758794    5253 out.go:177] * Deleting "kubenet-513000" in qemu2 ...
	W1014 07:50:48.792392    5253 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:48.792424    5253 start.go:729] Will try again in 5 seconds ...
	I1014 07:50:53.794613    5253 start.go:360] acquireMachinesLock for kubenet-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:53.795168    5253 start.go:364] duration metric: took 425.625µs to acquireMachinesLock for "kubenet-513000"
	I1014 07:50:53.795304    5253 start.go:93] Provisioning new machine with config: &{Name:kubenet-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:53.795596    5253 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:53.810218    5253 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:53.859048    5253 start.go:159] libmachine.API.Create for "kubenet-513000" (driver="qemu2")
	I1014 07:50:53.859102    5253 client.go:168] LocalClient.Create starting
	I1014 07:50:53.859241    5253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:53.859326    5253 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:53.859343    5253 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:53.859436    5253 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:53.859506    5253 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:53.859523    5253 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:53.860122    5253 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:54.026712    5253 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:54.205164    5253 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:54.205171    5253 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:54.205390    5253 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2
	I1014 07:50:54.215692    5253 main.go:141] libmachine: STDOUT: 
	I1014 07:50:54.215710    5253 main.go:141] libmachine: STDERR: 
	I1014 07:50:54.215770    5253 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2 +20000M
	I1014 07:50:54.224212    5253 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:54.224228    5253 main.go:141] libmachine: STDERR: 
	I1014 07:50:54.224239    5253 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2
	I1014 07:50:54.224243    5253 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:54.224255    5253 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:54.224299    5253 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:33:2f:5f:b7:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/kubenet-513000/disk.qcow2
	I1014 07:50:54.226127    5253 main.go:141] libmachine: STDOUT: 
	I1014 07:50:54.226140    5253 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:54.226153    5253 client.go:171] duration metric: took 367.050583ms to LocalClient.Create
	I1014 07:50:56.228311    5253 start.go:128] duration metric: took 2.432717s to createHost
	I1014 07:50:56.228384    5253 start.go:83] releasing machines lock for "kubenet-513000", held for 2.433220375s
	W1014 07:50:56.228722    5253 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:50:56.243308    5253 out.go:201] 
	W1014 07:50:56.247513    5253 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:50:56.247541    5253 out.go:270] * 
	* 
	W1014 07:50:56.249983    5253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:50:56.259339    5253 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.04s)
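
The stderr above also records minikube's recovery path: the first createHost fails, the half-created "kubenet-513000" profile is deleted, start.go waits a fixed five seconds, retries once, and only then exits with GUEST_PROVISION (exit status 80). The Go sketch below shows that retry-once shape; createHost and deleteHost are hypothetical stand-ins for illustration, not minikube's actual API.

    // retrysketch.go - the start/delete/retry-once flow visible in the log
    // ("! StartHost failed, but will try again" ... "Will try again in 5 seconds").
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost is a hypothetical stand-in for the qemu2 driver's create path,
    // where the socket_vmnet connection error surfaces in these tests.
    func createHost(profile string) error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    // deleteHost is a hypothetical stand-in for the cleanup step
    // ("* Deleting "kubenet-513000" in qemu2 ...").
    func deleteHost(profile string) {
        fmt.Printf("* Deleting %q ...\n", profile)
    }

    func startWithRetry(profile string) error {
        err := createHost(profile)
        if err == nil {
            return nil
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        deleteHost(profile)
        time.Sleep(5 * time.Second) // the fixed back-off seen in the log
        return createHost(profile)  // a second failure becomes GUEST_PROVISION
    }

    func main() {
        if err := startWithRetry("kubenet-513000"); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }

Because the daemon stays down for the entire run, the retry buys nothing here; both attempts fail identically about five seconds apart, which is exactly the ~10s duration each test records.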

TestNetworkPlugins/group/custom-flannel/Start (9.91s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.910392834s)

-- stdout --
	* [custom-flannel-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-513000" primary control-plane node in "custom-flannel-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:50:58.637479    5362 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:50:58.637645    5362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:58.637648    5362 out.go:358] Setting ErrFile to fd 2...
	I1014 07:50:58.637651    5362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:50:58.637791    5362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:50:58.639234    5362 out.go:352] Setting JSON to false
	I1014 07:50:58.657188    5362 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4828,"bootTime":1728912630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:50:58.657262    5362 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:50:58.663001    5362 out.go:177] * [custom-flannel-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:50:58.671052    5362 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:50:58.671120    5362 notify.go:220] Checking for updates...
	I1014 07:50:58.675971    5362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:50:58.678995    5362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:50:58.681963    5362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:50:58.684990    5362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:50:58.687967    5362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:50:58.691435    5362 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:58.691516    5362 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:50:58.691572    5362 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:50:58.695933    5362 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:50:58.703018    5362 start.go:297] selected driver: qemu2
	I1014 07:50:58.703026    5362 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:50:58.703032    5362 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:50:58.705595    5362 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:50:58.708966    5362 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:50:58.712023    5362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:50:58.712039    5362 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1014 07:50:58.712047    5362 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1014 07:50:58.712079    5362 start.go:340] cluster config:
	{Name:custom-flannel-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:50:58.716792    5362 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:50:58.723956    5362 out.go:177] * Starting "custom-flannel-513000" primary control-plane node in "custom-flannel-513000" cluster
	I1014 07:50:58.727972    5362 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:50:58.727990    5362 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:50:58.728001    5362 cache.go:56] Caching tarball of preloaded images
	I1014 07:50:58.728087    5362 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:50:58.728097    5362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:50:58.728178    5362 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/custom-flannel-513000/config.json ...
	I1014 07:50:58.728190    5362 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/custom-flannel-513000/config.json: {Name:mke3bc0f0a6df8e4fc28c1589c934e0b7c7f2259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:50:58.728598    5362 start.go:360] acquireMachinesLock for custom-flannel-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:50:58.728650    5362 start.go:364] duration metric: took 44.875µs to acquireMachinesLock for "custom-flannel-513000"
	I1014 07:50:58.728663    5362 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:50:58.728695    5362 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:50:58.736009    5362 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:50:58.753549    5362 start.go:159] libmachine.API.Create for "custom-flannel-513000" (driver="qemu2")
	I1014 07:50:58.753574    5362 client.go:168] LocalClient.Create starting
	I1014 07:50:58.753647    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:50:58.753686    5362 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:58.753700    5362 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:58.753733    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:50:58.753764    5362 main.go:141] libmachine: Decoding PEM data...
	I1014 07:50:58.753772    5362 main.go:141] libmachine: Parsing certificate...
	I1014 07:50:58.754228    5362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:50:58.910176    5362 main.go:141] libmachine: Creating SSH key...
	I1014 07:50:59.011229    5362 main.go:141] libmachine: Creating Disk image...
	I1014 07:50:59.011235    5362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:50:59.011422    5362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2
	I1014 07:50:59.021392    5362 main.go:141] libmachine: STDOUT: 
	I1014 07:50:59.021410    5362 main.go:141] libmachine: STDERR: 
	I1014 07:50:59.021483    5362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2 +20000M
	I1014 07:50:59.029935    5362 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:50:59.029951    5362 main.go:141] libmachine: STDERR: 
	I1014 07:50:59.029969    5362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2
	I1014 07:50:59.029975    5362 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:50:59.029987    5362 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:50:59.030013    5362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:16:33:1e:34:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2
	I1014 07:50:59.031812    5362 main.go:141] libmachine: STDOUT: 
	I1014 07:50:59.031836    5362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:50:59.031855    5362 client.go:171] duration metric: took 278.276209ms to LocalClient.Create
	I1014 07:51:01.034048    5362 start.go:128] duration metric: took 2.3053535s to createHost
	I1014 07:51:01.034145    5362 start.go:83] releasing machines lock for "custom-flannel-513000", held for 2.305515209s
	W1014 07:51:01.034190    5362 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:01.045750    5362 out.go:177] * Deleting "custom-flannel-513000" in qemu2 ...
	W1014 07:51:01.074366    5362 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:01.074396    5362 start.go:729] Will try again in 5 seconds ...
	I1014 07:51:06.076531    5362 start.go:360] acquireMachinesLock for custom-flannel-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:06.077186    5362 start.go:364] duration metric: took 562.583µs to acquireMachinesLock for "custom-flannel-513000"
	I1014 07:51:06.077375    5362 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:06.077643    5362 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:06.092084    5362 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:51:06.141990    5362 start.go:159] libmachine.API.Create for "custom-flannel-513000" (driver="qemu2")
	I1014 07:51:06.142045    5362 client.go:168] LocalClient.Create starting
	I1014 07:51:06.142190    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:06.142273    5362 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:06.142290    5362 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:06.142366    5362 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:06.142424    5362 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:06.142442    5362 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:06.142988    5362 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:06.313427    5362 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:06.449184    5362 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:06.449192    5362 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:06.449403    5362 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2
	I1014 07:51:06.459857    5362 main.go:141] libmachine: STDOUT: 
	I1014 07:51:06.459886    5362 main.go:141] libmachine: STDERR: 
	I1014 07:51:06.459942    5362 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2 +20000M
	I1014 07:51:06.468410    5362 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:06.468435    5362 main.go:141] libmachine: STDERR: 
	I1014 07:51:06.468449    5362 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2
	I1014 07:51:06.468454    5362 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:06.468467    5362 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:06.468504    5362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:6b:dd:c2:b6:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/custom-flannel-513000/disk.qcow2
	I1014 07:51:06.470368    5362 main.go:141] libmachine: STDOUT: 
	I1014 07:51:06.470383    5362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:06.470396    5362 client.go:171] duration metric: took 328.347333ms to LocalClient.Create
	I1014 07:51:08.472651    5362 start.go:128] duration metric: took 2.39498925s to createHost
	I1014 07:51:08.472713    5362 start.go:83] releasing machines lock for "custom-flannel-513000", held for 2.395532292s
	W1014 07:51:08.473034    5362 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:08.486578    5362 out.go:201] 
	W1014 07:51:08.490804    5362 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:51:08.490845    5362 out.go:270] * 
	* 
	W1014 07:51:08.493576    5362 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:51:08.501587    5362 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.91s)

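Note: every failure in this group reduces to the same root cause visible in the STDERR above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A quick way to check whether the socket_vmnet daemon is actually listening before rerunning the suite is a plain unix-socket dial; the Go sketch below assumes only the socket path shown in the log and is illustrative, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing socket_vmnet_client invocations above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the STDERR in the log:
		// the socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the CI host, every qemu2-driver test that follows will fail the same way regardless of CNI choice, which matches the pattern across the runs below.
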
TestNetworkPlugins/group/calico/Start (9.96s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.958478541s)

-- stdout --
	* [calico-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-513000" primary control-plane node in "calico-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1014 07:51:11.087109    5481 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:51:11.087280    5481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:11.087284    5481 out.go:358] Setting ErrFile to fd 2...
	I1014 07:51:11.087286    5481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:11.087423    5481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:51:11.088575    5481 out.go:352] Setting JSON to false
	I1014 07:51:11.106204    5481 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4841,"bootTime":1728912630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:51:11.106278    5481 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:51:11.111684    5481 out.go:177] * [calico-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:51:11.119687    5481 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:51:11.119747    5481 notify.go:220] Checking for updates...
	I1014 07:51:11.126724    5481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:51:11.128140    5481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:51:11.131657    5481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:51:11.134638    5481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:51:11.137730    5481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:51:11.141113    5481 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:11.141197    5481 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:11.141255    5481 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:51:11.145671    5481 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:51:11.152613    5481 start.go:297] selected driver: qemu2
	I1014 07:51:11.152622    5481 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:51:11.152630    5481 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:51:11.155248    5481 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:51:11.157633    5481 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:51:11.160739    5481 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:51:11.160758    5481 cni.go:84] Creating CNI manager for "calico"
	I1014 07:51:11.160761    5481 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1014 07:51:11.160799    5481 start.go:340] cluster config:
	{Name:calico-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:51:11.165421    5481 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:11.173702    5481 out.go:177] * Starting "calico-513000" primary control-plane node in "calico-513000" cluster
	I1014 07:51:11.177603    5481 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:51:11.177619    5481 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:51:11.177627    5481 cache.go:56] Caching tarball of preloaded images
	I1014 07:51:11.177703    5481 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:51:11.177708    5481 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:51:11.177764    5481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/calico-513000/config.json ...
	I1014 07:51:11.177775    5481 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/calico-513000/config.json: {Name:mka9c0a256a49a350e00d35aaaf517315833f218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:51:11.178146    5481 start.go:360] acquireMachinesLock for calico-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:11.178195    5481 start.go:364] duration metric: took 42.75µs to acquireMachinesLock for "calico-513000"
	I1014 07:51:11.178210    5481 start.go:93] Provisioning new machine with config: &{Name:calico-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:11.178250    5481 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:11.184596    5481 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:51:11.201057    5481 start.go:159] libmachine.API.Create for "calico-513000" (driver="qemu2")
	I1014 07:51:11.201084    5481 client.go:168] LocalClient.Create starting
	I1014 07:51:11.201154    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:11.201189    5481 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:11.201200    5481 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:11.201235    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:11.201264    5481 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:11.201275    5481 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:11.201771    5481 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:11.358058    5481 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:11.578246    5481 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:11.578257    5481 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:11.578527    5481 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2
	I1014 07:51:11.589056    5481 main.go:141] libmachine: STDOUT: 
	I1014 07:51:11.589073    5481 main.go:141] libmachine: STDERR: 
	I1014 07:51:11.589143    5481 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2 +20000M
	I1014 07:51:11.597640    5481 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:11.597659    5481 main.go:141] libmachine: STDERR: 
	I1014 07:51:11.597676    5481 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2
	I1014 07:51:11.597688    5481 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:11.597701    5481 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:11.597723    5481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ba:ea:59:2e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2
	I1014 07:51:11.599552    5481 main.go:141] libmachine: STDOUT: 
	I1014 07:51:11.599567    5481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:11.599587    5481 client.go:171] duration metric: took 398.501375ms to LocalClient.Create
	I1014 07:51:13.601792    5481 start.go:128] duration metric: took 2.423551792s to createHost
	I1014 07:51:13.601843    5481 start.go:83] releasing machines lock for "calico-513000", held for 2.4236685s
	W1014 07:51:13.601892    5481 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:13.613043    5481 out.go:177] * Deleting "calico-513000" in qemu2 ...
	W1014 07:51:13.640354    5481 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:13.640381    5481 start.go:729] Will try again in 5 seconds ...
	I1014 07:51:18.642613    5481 start.go:360] acquireMachinesLock for calico-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:18.643126    5481 start.go:364] duration metric: took 410.5µs to acquireMachinesLock for "calico-513000"
	I1014 07:51:18.643224    5481 start.go:93] Provisioning new machine with config: &{Name:calico-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:18.643443    5481 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:18.654621    5481 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:51:18.705149    5481 start.go:159] libmachine.API.Create for "calico-513000" (driver="qemu2")
	I1014 07:51:18.705194    5481 client.go:168] LocalClient.Create starting
	I1014 07:51:18.705398    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:18.705488    5481 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:18.705511    5481 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:18.705572    5481 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:18.705632    5481 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:18.705648    5481 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:18.706214    5481 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:18.873523    5481 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:18.947348    5481 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:18.947353    5481 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:18.947531    5481 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2
	I1014 07:51:18.957348    5481 main.go:141] libmachine: STDOUT: 
	I1014 07:51:18.957364    5481 main.go:141] libmachine: STDERR: 
	I1014 07:51:18.957413    5481 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2 +20000M
	I1014 07:51:18.965921    5481 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:18.965936    5481 main.go:141] libmachine: STDERR: 
	I1014 07:51:18.965947    5481 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2
	I1014 07:51:18.965951    5481 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:18.965961    5481 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:18.965997    5481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:a3:0b:da:c2:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/calico-513000/disk.qcow2
	I1014 07:51:18.967813    5481 main.go:141] libmachine: STDOUT: 
	I1014 07:51:18.967827    5481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:18.967839    5481 client.go:171] duration metric: took 262.642875ms to LocalClient.Create
	I1014 07:51:20.969985    5481 start.go:128] duration metric: took 2.326529375s to createHost
	I1014 07:51:20.970050    5481 start.go:83] releasing machines lock for "calico-513000", held for 2.326929791s
	W1014 07:51:20.970526    5481 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:20.984195    5481 out.go:201] 
	W1014 07:51:20.987235    5481 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:51:20.987285    5481 out.go:270] * 
	W1014 07:51:20.989905    5481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:51:20.999189    5481 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.96s)

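The calico run shows minikube's create-retry control flow end to end: createHost fails, the half-created "calico-513000" machine is deleted, the machines lock is re-acquired after a fixed 5-second pause, and a single retry runs before the test exits with GUEST_PROVISION. A minimal sketch of that retry-once shape, with hypothetical function names (this is not minikube's actual API; only the messages are taken from the log):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the qemu2 driver's host-creation step; here it
// always fails the way the log does.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// startWithRetry mirrors the shape of the log: fail, delete, wait 5 seconds,
// retry once, then give up with a GUEST_PROVISION-style error.
func startWithRetry(profile string) error {
	err := createHost(profile)
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	fmt.Printf("* Deleting %q ...\n", profile) // clean up the half-created machine
	time.Sleep(5 * time.Second)                // "Will try again in 5 seconds ..."
	if err := createHost(profile); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry("calico-513000"); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

Because the second attempt hits the same dead socket, each test burns roughly 10 seconds (two ~2.4s createHost attempts plus the 5s pause), which is why all the Start failures in this group cluster around 9.8-10s.
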
TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-513000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.861812833s)

-- stdout --
	* [false-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-513000" primary control-plane node in "false-513000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1014 07:51:23.578435    5598 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:51:23.578581    5598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:23.578585    5598 out.go:358] Setting ErrFile to fd 2...
	I1014 07:51:23.578588    5598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:23.578716    5598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:51:23.579892    5598 out.go:352] Setting JSON to false
	I1014 07:51:23.597633    5598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4853,"bootTime":1728912630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:51:23.597704    5598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:51:23.604802    5598 out.go:177] * [false-513000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:51:23.612702    5598 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:51:23.612724    5598 notify.go:220] Checking for updates...
	I1014 07:51:23.619869    5598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:51:23.621284    5598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:51:23.625778    5598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:51:23.628793    5598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:51:23.630155    5598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:51:23.633155    5598 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:23.633238    5598 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:23.633282    5598 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:51:23.637792    5598 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:51:23.642737    5598 start.go:297] selected driver: qemu2
	I1014 07:51:23.642744    5598 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:51:23.642750    5598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:51:23.645280    5598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:51:23.648746    5598 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:51:23.651866    5598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:51:23.651880    5598 cni.go:84] Creating CNI manager for "false"
	I1014 07:51:23.651902    5598 start.go:340] cluster config:
	{Name:false-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:51:23.656589    5598 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:23.664814    5598 out.go:177] * Starting "false-513000" primary control-plane node in "false-513000" cluster
	I1014 07:51:23.668774    5598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:51:23.668791    5598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:51:23.668809    5598 cache.go:56] Caching tarball of preloaded images
	I1014 07:51:23.668890    5598 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:51:23.668897    5598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:51:23.668961    5598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/false-513000/config.json ...
	I1014 07:51:23.668972    5598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/false-513000/config.json: {Name:mk990c6aed63a4526b8593a7cc9c386249757b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:51:23.669392    5598 start.go:360] acquireMachinesLock for false-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:23.669439    5598 start.go:364] duration metric: took 42µs to acquireMachinesLock for "false-513000"
	I1014 07:51:23.669452    5598 start.go:93] Provisioning new machine with config: &{Name:false-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:23.669491    5598 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:23.677744    5598 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:51:23.695231    5598 start.go:159] libmachine.API.Create for "false-513000" (driver="qemu2")
	I1014 07:51:23.695254    5598 client.go:168] LocalClient.Create starting
	I1014 07:51:23.695323    5598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:23.695361    5598 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:23.695374    5598 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:23.695412    5598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:23.695442    5598 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:23.695452    5598 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:23.695830    5598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:23.851850    5598 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:23.954158    5598 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:23.954164    5598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:23.954350    5598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2
	I1014 07:51:23.964609    5598 main.go:141] libmachine: STDOUT: 
	I1014 07:51:23.964624    5598 main.go:141] libmachine: STDERR: 
	I1014 07:51:23.964689    5598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2 +20000M
	I1014 07:51:23.973198    5598 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:23.973219    5598 main.go:141] libmachine: STDERR: 
	I1014 07:51:23.973232    5598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2
	I1014 07:51:23.973237    5598 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:23.973247    5598 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:23.973277    5598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:7f:f7:3d:30:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2
	I1014 07:51:23.975109    5598 main.go:141] libmachine: STDOUT: 
	I1014 07:51:23.975123    5598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:23.975142    5598 client.go:171] duration metric: took 279.881584ms to LocalClient.Create
	I1014 07:51:25.977294    5598 start.go:128] duration metric: took 2.307812625s to createHost
	I1014 07:51:25.977360    5598 start.go:83] releasing machines lock for "false-513000", held for 2.307942084s
	W1014 07:51:25.977419    5598 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:25.988662    5598 out.go:177] * Deleting "false-513000" in qemu2 ...
	W1014 07:51:26.019181    5598 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:26.019207    5598 start.go:729] Will try again in 5 seconds ...
	I1014 07:51:31.021290    5598 start.go:360] acquireMachinesLock for false-513000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:31.021871    5598 start.go:364] duration metric: took 503.708µs to acquireMachinesLock for "false-513000"
	I1014 07:51:31.021992    5598 start.go:93] Provisioning new machine with config: &{Name:false-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-513000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:31.022373    5598 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:31.038091    5598 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 07:51:31.086286    5598 start.go:159] libmachine.API.Create for "false-513000" (driver="qemu2")
	I1014 07:51:31.086326    5598 client.go:168] LocalClient.Create starting
	I1014 07:51:31.086446    5598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:31.086519    5598 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:31.086537    5598 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:31.086603    5598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:31.086681    5598 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:31.086695    5598 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:31.087295    5598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:31.251051    5598 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:31.337050    5598 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:31.337056    5598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:31.337253    5598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2
	I1014 07:51:31.347074    5598 main.go:141] libmachine: STDOUT: 
	I1014 07:51:31.347091    5598 main.go:141] libmachine: STDERR: 
	I1014 07:51:31.347144    5598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2 +20000M
	I1014 07:51:31.355577    5598 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:31.355596    5598 main.go:141] libmachine: STDERR: 
	I1014 07:51:31.355607    5598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2
	I1014 07:51:31.355622    5598 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:31.355631    5598 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:31.355656    5598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:bc:ff:34:d4:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/false-513000/disk.qcow2
	I1014 07:51:31.357520    5598 main.go:141] libmachine: STDOUT: 
	I1014 07:51:31.357541    5598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:31.357563    5598 client.go:171] duration metric: took 271.235ms to LocalClient.Create
	I1014 07:51:33.359712    5598 start.go:128] duration metric: took 2.3373405s to createHost
	I1014 07:51:33.359784    5598 start.go:83] releasing machines lock for "false-513000", held for 2.337917625s
	W1014 07:51:33.360254    5598 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:33.376693    5598 out.go:201] 
	W1014 07:51:33.380984    5598 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:51:33.381017    5598 out.go:270] * 
	W1014 07:51:33.383985    5598 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:51:33.392839    5598 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)

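Note that disk preparation succeeds every time before the launch fails: each run shows a raw-to-qcow2 convert followed by a +20000M resize, both with empty STDERR. Replaying just those two qemu-img invocations outside minikube can rule out image problems and isolate the failure to socket_vmnet; a sketch assuming qemu-img is on PATH and using illustrative file names (the log uses the profile's machines directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, echoing the log's
// "executing: ..." style.
func run(name string, args ...string) error {
	fmt.Println("executing:", name, args)
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Illustrative paths; substitute the machine directory from the log.
	raw, img := "disk.qcow2.raw", "disk.qcow2"

	// Same two steps as the log: convert the raw seed image to qcow2 ...
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img); err != nil {
		fmt.Fprintln(os.Stderr, "convert failed:", err)
		os.Exit(1)
	}
	// ... then grow it by 20000 MB ("qemu-img resize ... +20000M").
	if err := run("qemu-img", "resize", img, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize failed:", err)
		os.Exit(1)
	}
	fmt.Println("disk image prepared; the failures above happen only at VM launch")
}
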
TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.754739334s)

-- stdout --
	* [old-k8s-version-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-554000" primary control-plane node in "old-k8s-version-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1014 07:51:35.758829    5710 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:51:35.758975    5710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:35.758979    5710 out.go:358] Setting ErrFile to fd 2...
	I1014 07:51:35.758982    5710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:35.759101    5710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:51:35.760243    5710 out.go:352] Setting JSON to false
	I1014 07:51:35.777859    5710 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4865,"bootTime":1728912630,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:51:35.777935    5710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:51:35.783745    5710 out.go:177] * [old-k8s-version-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:51:35.791737    5710 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:51:35.791789    5710 notify.go:220] Checking for updates...
	I1014 07:51:35.797720    5710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:51:35.800673    5710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:51:35.803753    5710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:51:35.806693    5710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:51:35.809673    5710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:51:35.813027    5710 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:35.813100    5710 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:35.813157    5710 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:51:35.817623    5710 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:51:35.824713    5710 start.go:297] selected driver: qemu2
	I1014 07:51:35.824721    5710 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:51:35.824728    5710 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:51:35.827129    5710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:51:35.829705    5710 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:51:35.832824    5710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:51:35.832848    5710 cni.go:84] Creating CNI manager for ""
	I1014 07:51:35.832870    5710 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1014 07:51:35.832915    5710 start.go:340] cluster config:
	{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:51:35.837537    5710 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:35.845702    5710 out.go:177] * Starting "old-k8s-version-554000" primary control-plane node in "old-k8s-version-554000" cluster
	I1014 07:51:35.849650    5710 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 07:51:35.849664    5710 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1014 07:51:35.849671    5710 cache.go:56] Caching tarball of preloaded images
	I1014 07:51:35.849741    5710 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:51:35.849747    5710 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1014 07:51:35.849799    5710 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/old-k8s-version-554000/config.json ...
	I1014 07:51:35.849810    5710 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/old-k8s-version-554000/config.json: {Name:mk733a6d9fb7e1d2c0cc825d7edd4abe4eb1aca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:51:35.850046    5710 start.go:360] acquireMachinesLock for old-k8s-version-554000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:35.850092    5710 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "old-k8s-version-554000"
	I1014 07:51:35.850105    5710 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:35.850131    5710 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:35.853675    5710 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:51:35.869912    5710 start.go:159] libmachine.API.Create for "old-k8s-version-554000" (driver="qemu2")
	I1014 07:51:35.869939    5710 client.go:168] LocalClient.Create starting
	I1014 07:51:35.870007    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:35.870043    5710 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:35.870053    5710 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:35.870086    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:35.870120    5710 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:35.870126    5710 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:35.870470    5710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:36.026327    5710 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:36.067136    5710 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:36.067142    5710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:36.067329    5710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:36.077121    5710 main.go:141] libmachine: STDOUT: 
	I1014 07:51:36.077140    5710 main.go:141] libmachine: STDERR: 
	I1014 07:51:36.077196    5710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2 +20000M
	I1014 07:51:36.085691    5710 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:36.085705    5710 main.go:141] libmachine: STDERR: 
	I1014 07:51:36.085723    5710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:36.085728    5710 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:36.085739    5710 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:36.085766    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9d:50:49:16:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:36.087470    5710 main.go:141] libmachine: STDOUT: 
	I1014 07:51:36.087484    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:36.087509    5710 client.go:171] duration metric: took 217.56625ms to LocalClient.Create
	I1014 07:51:38.089705    5710 start.go:128] duration metric: took 2.239567417s to createHost
	I1014 07:51:38.089779    5710 start.go:83] releasing machines lock for "old-k8s-version-554000", held for 2.239705709s
	W1014 07:51:38.089852    5710 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:38.101281    5710 out.go:177] * Deleting "old-k8s-version-554000" in qemu2 ...
	W1014 07:51:38.132290    5710 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:38.132323    5710 start.go:729] Will try again in 5 seconds ...
	I1014 07:51:43.134524    5710 start.go:360] acquireMachinesLock for old-k8s-version-554000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:43.135110    5710 start.go:364] duration metric: took 475.917µs to acquireMachinesLock for "old-k8s-version-554000"
	I1014 07:51:43.135266    5710 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:43.135576    5710 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:43.145274    5710 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:51:43.195152    5710 start.go:159] libmachine.API.Create for "old-k8s-version-554000" (driver="qemu2")
	I1014 07:51:43.195199    5710 client.go:168] LocalClient.Create starting
	I1014 07:51:43.195363    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:43.195458    5710 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:43.195474    5710 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:43.195544    5710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:43.195601    5710 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:43.195613    5710 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:43.196190    5710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:43.364819    5710 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:43.411262    5710 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:43.411268    5710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:43.411474    5710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:43.421479    5710 main.go:141] libmachine: STDOUT: 
	I1014 07:51:43.421498    5710 main.go:141] libmachine: STDERR: 
	I1014 07:51:43.421555    5710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2 +20000M
	I1014 07:51:43.429953    5710 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:43.429978    5710 main.go:141] libmachine: STDERR: 
	I1014 07:51:43.429996    5710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:43.430001    5710 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:43.430009    5710 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:43.430042    5710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:fd:20:35:93:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:43.431845    5710 main.go:141] libmachine: STDOUT: 
	I1014 07:51:43.431859    5710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:43.431872    5710 client.go:171] duration metric: took 236.669709ms to LocalClient.Create
	I1014 07:51:45.434037    5710 start.go:128] duration metric: took 2.298460458s to createHost
	I1014 07:51:45.434089    5710 start.go:83] releasing machines lock for "old-k8s-version-554000", held for 2.298982666s
	W1014 07:51:45.434503    5710 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:45.449085    5710 out.go:201] 
	W1014 07:51:45.452173    5710 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:51:45.452200    5710 out.go:270] * 
	* 
	W1014 07:51:45.454955    5710 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:51:45.467022    5710 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
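Note that in the log above both qemu-img steps (convert, then resize) exit with empty STDERR, so disk creation succeeds; the start only fails when socket_vmnet_client tries to attach the VM to the host network. A minimal Go sketch of the same two-step disk build, with illustrative file names rather than the report's real paths:

-- sketch: qemu-img disk build (Go, illustrative) --
package main

import (
	"fmt"
	"os/exec"
)

// createDisk mirrors the two qemu-img invocations in the log above:
// convert the raw seed image to qcow2, then grow it by the requested
// amount (+20000M in the report).
func createDisk(raw, qcow2 string, extraMB int) error {
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
		{"qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)},
	}
	for _, args := range steps {
		// CombinedOutput captures the STDOUT/STDERR pair that the
		// libmachine log echoes after each command.
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
	}
}
-- /sketch --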
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (72.635125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)
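Every subtest in this group reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU ever boots. Below is a minimal Go probe for that precondition; the socket path is the default that appears in the commands above, and how the daemon is started on this agent (e.g. a launchd or Homebrew service) is host setup outside this report:

-- sketch: socket_vmnet reachability probe (Go, illustrative) --
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeSocketVMnet dials the unix socket that socket_vmnet_client
// connects to. "connection refused" here reproduces the exact failure
// in the log: the socket_vmnet daemon is not running or not listening.
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
		os.Exit(1)
	}
	fmt.Println("socket_vmnet is accepting connections")
}
-- /sketch --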

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-554000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-554000 create -f testdata/busybox.yaml: exit status 1 (29.029917ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-554000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-554000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (34.4625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (34.117584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
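FirstStart never created the cluster, so minikube never wrote an "old-k8s-version-554000" entry to the kubeconfig, and every kubectl invocation in the remaining subtests fails with the same "context does not exist" error. A short sketch that checks for the context the way kubectl resolves it (a hypothetical helper, not part of the test suite):

-- sketch: kubeconfig context check (Go, illustrative) --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hasContext asks kubectl itself (honoring KUBECONFIG) whether the
// named context exists; `kubectl config get-contexts -o name` prints
// one context name per line.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("old-k8s-version-554000")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("context present:", ok)
}
-- /sketch --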

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-554000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-554000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-554000 describe deploy/metrics-server -n kube-system: exit status 1 (27.197708ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-554000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-554000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (34.122125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.198065916s)

                                                
                                                
-- stdout --
	* [old-k8s-version-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-554000" primary control-plane node in "old-k8s-version-554000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:51:49.526068    5760 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:51:49.526226    5760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:49.526230    5760 out.go:358] Setting ErrFile to fd 2...
	I1014 07:51:49.526233    5760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:49.526353    5760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:51:49.527439    5760 out.go:352] Setting JSON to false
	I1014 07:51:49.544969    5760 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4879,"bootTime":1728912630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:51:49.545036    5760 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:51:49.549786    5760 out.go:177] * [old-k8s-version-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:51:49.556772    5760 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:51:49.556857    5760 notify.go:220] Checking for updates...
	I1014 07:51:49.564702    5760 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:51:49.567757    5760 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:51:49.569204    5760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:51:49.572748    5760 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:51:49.575709    5760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:51:49.579007    5760 config.go:182] Loaded profile config "old-k8s-version-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1014 07:51:49.582712    5760 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 07:51:49.585736    5760 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:51:49.589705    5760 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:51:49.596713    5760 start.go:297] selected driver: qemu2
	I1014 07:51:49.596719    5760 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:51:49.596767    5760 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:51:49.599305    5760 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:51:49.599333    5760 cni.go:84] Creating CNI manager for ""
	I1014 07:51:49.599359    5760 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1014 07:51:49.599385    5760 start.go:340] cluster config:
	{Name:old-k8s-version-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:51:49.603827    5760 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:49.611662    5760 out.go:177] * Starting "old-k8s-version-554000" primary control-plane node in "old-k8s-version-554000" cluster
	I1014 07:51:49.614776    5760 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 07:51:49.614797    5760 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1014 07:51:49.614808    5760 cache.go:56] Caching tarball of preloaded images
	I1014 07:51:49.614901    5760 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:51:49.614907    5760 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1014 07:51:49.614973    5760 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/old-k8s-version-554000/config.json ...
	I1014 07:51:49.615332    5760 start.go:360] acquireMachinesLock for old-k8s-version-554000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:49.615365    5760 start.go:364] duration metric: took 25.917µs to acquireMachinesLock for "old-k8s-version-554000"
	I1014 07:51:49.615375    5760 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:51:49.615379    5760 fix.go:54] fixHost starting: 
	I1014 07:51:49.615500    5760 fix.go:112] recreateIfNeeded on old-k8s-version-554000: state=Stopped err=<nil>
	W1014 07:51:49.615508    5760 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:51:49.619727    5760 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-554000" ...
	I1014 07:51:49.627730    5760 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:49.627772    5760 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:fd:20:35:93:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:49.629958    5760 main.go:141] libmachine: STDOUT: 
	I1014 07:51:49.629975    5760 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:49.630003    5760 fix.go:56] duration metric: took 14.622166ms for fixHost
	I1014 07:51:49.630007    5760 start.go:83] releasing machines lock for "old-k8s-version-554000", held for 14.637916ms
	W1014 07:51:49.630013    5760 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:51:49.630050    5760 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:49.630054    5760 start.go:729] Will try again in 5 seconds ...
	I1014 07:51:54.632191    5760 start.go:360] acquireMachinesLock for old-k8s-version-554000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:54.632592    5760 start.go:364] duration metric: took 328.5µs to acquireMachinesLock for "old-k8s-version-554000"
	I1014 07:51:54.632710    5760 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:51:54.632734    5760 fix.go:54] fixHost starting: 
	I1014 07:51:54.633448    5760 fix.go:112] recreateIfNeeded on old-k8s-version-554000: state=Stopped err=<nil>
	W1014 07:51:54.633478    5760 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:51:54.640769    5760 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-554000" ...
	I1014 07:51:54.644864    5760 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:54.645112    5760 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:fd:20:35:93:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/old-k8s-version-554000/disk.qcow2
	I1014 07:51:54.654921    5760 main.go:141] libmachine: STDOUT: 
	I1014 07:51:54.654992    5760 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:54.655079    5760 fix.go:56] duration metric: took 22.35ms for fixHost
	I1014 07:51:54.655102    5760 start.go:83] releasing machines lock for "old-k8s-version-554000", held for 22.48025ms
	W1014 07:51:54.655364    5760 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:54.662790    5760 out.go:201] 
	W1014 07:51:54.666897    5760 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:51:54.666942    5760 out.go:270] * 
	* 
	W1014 07:51:54.669369    5760 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:51:54.677847    5760 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-554000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (74.893792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
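The second start repeats the shape of the first: fixHost fails once, minikube waits a fixed five seconds ("Will try again in 5 seconds ..."), tries once more, and only then exits with GUEST_PROVISION. A sketch of that retry pattern, with a stand-in for the real libmachine start call rather than minikube's actual API:

-- sketch: single-retry start pattern (Go, illustrative) --
package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry reproduces the control flow visible in the log:
// one failed host start, a fixed 5s pause, one more attempt, and only
// a second failure is surfaced as GUEST_PROVISION.
func startWithRetry(startHost func() error) error {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			return fmt.Errorf("GUEST_PROVISION: %w", err)
		}
	}
	return nil
}

func main() {
	fail := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	fmt.Println(startWithRetry(fail))
}
-- /sketch --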

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-554000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (35.715708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-554000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-554000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-554000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.26275ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-554000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-554000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (34.201125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-554000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (34.29775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
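The -want +got diff above is a plain set difference: the expected v1.20.0 image list compared against `image list` output from a VM that never ran, so all eight images come back missing. A sketch of that comparison, using two sample images from the list:

-- sketch: missing-image set difference (Go, illustrative) --
package main

import "fmt"

// missingImages reproduces the check behind the -want +got diff:
// every expected image must appear in the `image list` output. With
// the VM stopped the got list is empty, so everything is missing.
func missingImages(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var missing []string
	for _, img := range want {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	fmt.Println(missingImages(want, nil)) // both reported missing
}
-- /sketch --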

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-554000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-554000 --alsologtostderr -v=1: exit status 83 (43.312458ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-554000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-554000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:51:54.977078    5779 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:51:54.977470    5779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:54.977473    5779 out.go:358] Setting ErrFile to fd 2...
	I1014 07:51:54.977476    5779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:54.977636    5779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:51:54.977867    5779 out.go:352] Setting JSON to false
	I1014 07:51:54.977876    5779 mustload.go:65] Loading cluster: old-k8s-version-554000
	I1014 07:51:54.978113    5779 config.go:182] Loaded profile config "old-k8s-version-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1014 07:51:54.981045    5779 out.go:177] * The control-plane node old-k8s-version-554000 host is not running: state=Stopped
	I1014 07:51:54.984109    5779 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-554000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-554000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (33.99575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (34.181209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
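pause exits 83 because the profile's host is Stopped, and the post-mortem confirms that with `status --format={{.Host}}`, which returns exit code 7 for a stopped host; the harness itself notes "status error: exit status 7 (may be ok)". A sketch that performs the same status check and tolerates exit 7 (the relative binary path is the test build used throughout this report):

-- sketch: host status check tolerating exit 7 (Go, illustrative) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState shells out the same way the post-mortem helper does.
// On a non-zero exit, exec still returns the captured stdout, so the
// "Stopped" state is readable even when the exit code is 7.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		return "", err // could not even run the binary
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := hostState("old-k8s-version-554000")
	fmt.Printf("state=%q err=%v\n", state, err)
}
-- /sketch --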

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-029000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
E1014 07:51:57.146784    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-029000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.796198583s)

                                                
                                                
-- stdout --
	* [no-preload-029000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-029000" primary control-plane node in "no-preload-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:51:55.321080    5796 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:51:55.321224    5796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:55.321228    5796 out.go:358] Setting ErrFile to fd 2...
	I1014 07:51:55.321231    5796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:51:55.321366    5796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:51:55.322512    5796 out.go:352] Setting JSON to false
	I1014 07:51:55.340180    5796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4885,"bootTime":1728912630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:51:55.340277    5796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:51:55.344187    5796 out.go:177] * [no-preload-029000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:51:55.351096    5796 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:51:55.351159    5796 notify.go:220] Checking for updates...
	I1014 07:51:55.358065    5796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:51:55.361065    5796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:51:55.364075    5796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:51:55.366975    5796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:51:55.370071    5796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:51:55.373412    5796 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:55.373474    5796 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:51:55.373544    5796 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:51:55.378018    5796 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:51:55.385069    5796 start.go:297] selected driver: qemu2
	I1014 07:51:55.385078    5796 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:51:55.385086    5796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:51:55.387613    5796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:51:55.390971    5796 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:51:55.394124    5796 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:51:55.394146    5796 cni.go:84] Creating CNI manager for ""
	I1014 07:51:55.394180    5796 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:51:55.394189    5796 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:51:55.394234    5796 start.go:340] cluster config:
	{Name:no-preload-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:51:55.398821    5796 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.406057    5796 out.go:177] * Starting "no-preload-029000" primary control-plane node in "no-preload-029000" cluster
	I1014 07:51:55.410038    5796 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:51:55.410122    5796 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/no-preload-029000/config.json ...
	I1014 07:51:55.410141    5796 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/no-preload-029000/config.json: {Name:mkbb62a4d51a26c68e82b9d8ab7fece6f570a9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:51:55.410157    5796 cache.go:107] acquiring lock: {Name:mkeefcaf33444a55b79d7e408fdc59cbd0dc16ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410157    5796 cache.go:107] acquiring lock: {Name:mkfbee7ed24a5bff77ccc82c9584e51a8ba123a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410165    5796 cache.go:107] acquiring lock: {Name:mkebde59ca6c1888e7bc7d15512f03f0354a68ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410185    5796 cache.go:107] acquiring lock: {Name:mk5066bddbc8c04953dc0a3dc8c600b5079a9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410276    5796 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1014 07:51:55.410286    5796 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 142.917µs
	I1014 07:51:55.410352    5796 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 07:51:55.410359    5796 cache.go:107] acquiring lock: {Name:mk5c8607a7d79ea9ef91ef1406b01d7c0663f552 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410391    5796 cache.go:107] acquiring lock: {Name:mkdbd4614d7c4546dc814133e17ac19bea3d416d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410404    5796 cache.go:107] acquiring lock: {Name:mke3ef47bf1bcfc097138c0ff14aa7ee697119a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410456    5796 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1014 07:51:55.410472    5796 cache.go:107] acquiring lock: {Name:mk56ec1effa451602fda3f8c42c7bd585a870204 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:51:55.410570    5796 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1014 07:51:55.410605    5796 start.go:360] acquireMachinesLock for no-preload-029000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:51:55.410610    5796 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 07:51:55.410347    5796 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 07:51:55.410613    5796 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 07:51:55.410689    5796 start.go:364] duration metric: took 78.625µs to acquireMachinesLock for "no-preload-029000"
	I1014 07:51:55.410709    5796 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1014 07:51:55.410702    5796 start.go:93] Provisioning new machine with config: &{Name:no-preload-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:51:55.410755    5796 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:51:55.410883    5796 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 07:51:55.418927    5796 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:51:55.424139    5796 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 07:51:55.424980    5796 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1014 07:51:55.425063    5796 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1014 07:51:55.425094    5796 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 07:51:55.426951    5796 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 07:51:55.427089    5796 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 07:51:55.427175    5796 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 07:51:55.436440    5796 start.go:159] libmachine.API.Create for "no-preload-029000" (driver="qemu2")
	I1014 07:51:55.436462    5796 client.go:168] LocalClient.Create starting
	I1014 07:51:55.436537    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:51:55.436572    5796 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:55.436588    5796 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:55.436629    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:51:55.436657    5796 main.go:141] libmachine: Decoding PEM data...
	I1014 07:51:55.436666    5796 main.go:141] libmachine: Parsing certificate...
	I1014 07:51:55.437029    5796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:51:55.597467    5796 main.go:141] libmachine: Creating SSH key...
	I1014 07:51:55.634791    5796 main.go:141] libmachine: Creating Disk image...
	I1014 07:51:55.634806    5796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:51:55.635023    5796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:51:55.644591    5796 main.go:141] libmachine: STDOUT: 
	I1014 07:51:55.644608    5796 main.go:141] libmachine: STDERR: 
	I1014 07:51:55.644678    5796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2 +20000M
	I1014 07:51:55.653544    5796 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:51:55.653562    5796 main.go:141] libmachine: STDERR: 
	I1014 07:51:55.653589    5796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:51:55.653593    5796 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:51:55.653606    5796 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:51:55.653645    5796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:0d:e3:e9:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:51:55.655741    5796 main.go:141] libmachine: STDOUT: 
	I1014 07:51:55.655755    5796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:51:55.655773    5796 client.go:171] duration metric: took 219.309125ms to LocalClient.Create
	I1014 07:51:55.878682    5796 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1014 07:51:55.902637    5796 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1014 07:51:55.922483    5796 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1014 07:51:55.954062    5796 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1014 07:51:56.050191    5796 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1014 07:51:56.055145    5796 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1014 07:51:56.055161    5796 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 644.850417ms
	I1014 07:51:56.055174    5796 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1014 07:51:56.084255    5796 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1014 07:51:56.171829    5796 cache.go:162] opening:  /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1014 07:51:57.656015    5796 start.go:128] duration metric: took 2.245266333s to createHost
	I1014 07:51:57.656072    5796 start.go:83] releasing machines lock for "no-preload-029000", held for 2.245404333s
	W1014 07:51:57.656124    5796 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:57.673267    5796 out.go:177] * Deleting "no-preload-029000" in qemu2 ...
	W1014 07:51:57.702611    5796 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:51:57.702639    5796 start.go:729] Will try again in 5 seconds ...
	I1014 07:51:59.082082    5796 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1014 07:51:59.082155    5796 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.672011s
	I1014 07:51:59.082186    5796 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1014 07:51:59.590712    5796 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1014 07:51:59.590775    5796 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.180444583s
	I1014 07:51:59.590803    5796 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1014 07:51:59.774453    5796 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1014 07:51:59.774528    5796 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.364428583s
	I1014 07:51:59.774569    5796 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1014 07:51:59.919092    5796 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1014 07:51:59.919134    5796 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.509044834s
	I1014 07:51:59.919161    5796 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1014 07:52:00.112289    5796 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1014 07:52:00.112330    5796 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.702028792s
	I1014 07:52:00.112353    5796 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1014 07:52:02.704004    5796 start.go:360] acquireMachinesLock for no-preload-029000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:02.704315    5796 start.go:364] duration metric: took 263.5µs to acquireMachinesLock for "no-preload-029000"
	I1014 07:52:02.704385    5796 start.go:93] Provisioning new machine with config: &{Name:no-preload-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:52:02.704555    5796 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:52:02.720244    5796 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:52:02.756231    5796 start.go:159] libmachine.API.Create for "no-preload-029000" (driver="qemu2")
	I1014 07:52:02.756264    5796 client.go:168] LocalClient.Create starting
	I1014 07:52:02.756345    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:52:02.756389    5796 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:02.756401    5796 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:02.756436    5796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:52:02.756483    5796 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:02.756492    5796 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:02.756786    5796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:52:02.919246    5796 main.go:141] libmachine: Creating SSH key...
	I1014 07:52:03.013701    5796 main.go:141] libmachine: Creating Disk image...
	I1014 07:52:03.013708    5796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:52:03.013915    5796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:52:03.024184    5796 main.go:141] libmachine: STDOUT: 
	I1014 07:52:03.024204    5796 main.go:141] libmachine: STDERR: 
	I1014 07:52:03.024264    5796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2 +20000M
	I1014 07:52:03.033030    5796 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:52:03.033047    5796 main.go:141] libmachine: STDERR: 
	I1014 07:52:03.033057    5796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:52:03.033063    5796 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:52:03.033073    5796 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:03.033107    5796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:39:50:59:c6:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:52:03.035050    5796 main.go:141] libmachine: STDOUT: 
	I1014 07:52:03.035066    5796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:03.035078    5796 client.go:171] duration metric: took 278.813709ms to LocalClient.Create
	I1014 07:52:03.512073    5796 cache.go:157] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1014 07:52:03.512130    5796 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.101868292s
	I1014 07:52:03.512153    5796 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1014 07:52:03.512191    5796 cache.go:87] Successfully saved all images to host disk.
	I1014 07:52:05.037232    5796 start.go:128] duration metric: took 2.332679333s to createHost
	I1014 07:52:05.037339    5796 start.go:83] releasing machines lock for "no-preload-029000", held for 2.332996125s
	W1014 07:52:05.037705    5796 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:05.048468    5796 out.go:201] 
	W1014 07:52:05.057443    5796 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:05.057471    5796 out.go:270] * 
	* 
	W1014 07:52:05.060165    5796 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:05.070505    5796 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-029000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (71.266167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.87s)
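Every start attempt in this run dies the same way: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet (SocketVMnetPath in the cluster config above), so VM creation fails before Kubernetes is ever involved. A minimal, illustrative Go probe of that socket (not part of minikube; it assumes only the socket path shown in this log) looks like:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket the qemu2 driver hands to
	// socket_vmnet_client; "connection refused" here reproduces the
	// ERROR: Failed to connect to "/var/run/socket_vmnet" lines above,
	// and usually means the socket_vmnet daemon is not running.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}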

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-029000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-029000 create -f testdata/busybox.yaml: exit status 1 (28.664625ms)

** stderr ** 
	error: context "no-preload-029000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-029000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (33.978625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (34.114708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
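DeployApp (and the addon checks that follow) fail secondarily: because FirstStart never created the cluster, the kubeconfig has no "no-preload-029000" context, so every kubectl --context invocation exits 1 immediately. A hedged sketch of the guard such a check amounts to (illustrative only, not from the test suite; it shells out to the standard `kubectl config get-contexts -o name`):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the kubeconfig knows the given context,
// which is exactly what the `error: context "no-preload-029000" does not
// exist` message above is complaining about.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	fmt.Println(contextExists("no-preload-029000"))
}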

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-029000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-029000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-029000 describe deploy/metrics-server -n kube-system: exit status 1 (27.055708ms)

** stderr ** 
	error: context "no-preload-029000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-029000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (34.571917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-029000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-029000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.192693167s)

-- stdout --
	* [no-preload-029000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-029000" primary control-plane node in "no-preload-029000" cluster
	* Restarting existing qemu2 VM for "no-preload-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:52:07.574743    5867 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:07.574898    5867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:07.574901    5867 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:07.574903    5867 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:07.575021    5867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:07.576076    5867 out.go:352] Setting JSON to false
	I1014 07:52:07.593553    5867 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4897,"bootTime":1728912630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:52:07.593633    5867 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:52:07.598417    5867 out.go:177] * [no-preload-029000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:52:07.605345    5867 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:52:07.605407    5867 notify.go:220] Checking for updates...
	I1014 07:52:07.613317    5867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:52:07.617385    5867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:52:07.621278    5867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:52:07.624357    5867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:52:07.627342    5867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:52:07.630600    5867 config.go:182] Loaded profile config "no-preload-029000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:07.630875    5867 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:52:07.634270    5867 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:52:07.640315    5867 start.go:297] selected driver: qemu2
	I1014 07:52:07.640323    5867 start.go:901] validating driver "qemu2" against &{Name:no-preload-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:07.640382    5867 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:52:07.642949    5867 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:52:07.642986    5867 cni.go:84] Creating CNI manager for ""
	I1014 07:52:07.643018    5867 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:52:07.643047    5867 start.go:340] cluster config:
	{Name:no-preload-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:07.647558    5867 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.656301    5867 out.go:177] * Starting "no-preload-029000" primary control-plane node in "no-preload-029000" cluster
	I1014 07:52:07.660309    5867 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:52:07.660381    5867 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/no-preload-029000/config.json ...
	I1014 07:52:07.660387    5867 cache.go:107] acquiring lock: {Name:mkfbee7ed24a5bff77ccc82c9584e51a8ba123a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660387    5867 cache.go:107] acquiring lock: {Name:mkeefcaf33444a55b79d7e408fdc59cbd0dc16ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660407    5867 cache.go:107] acquiring lock: {Name:mk5066bddbc8c04953dc0a3dc8c600b5079a9148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660480    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1014 07:52:07.660489    5867 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.875µs
	I1014 07:52:07.660495    5867 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1014 07:52:07.660501    5867 cache.go:107] acquiring lock: {Name:mk56ec1effa451602fda3f8c42c7bd585a870204 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660507    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1014 07:52:07.660512    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1014 07:52:07.660515    5867 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 127.792µs
	I1014 07:52:07.660519    5867 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1014 07:52:07.660518    5867 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 139.333µs
	I1014 07:52:07.660527    5867 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1014 07:52:07.660567    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1014 07:52:07.660576    5867 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 75.875µs
	I1014 07:52:07.660579    5867 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1014 07:52:07.660575    5867 cache.go:107] acquiring lock: {Name:mk5c8607a7d79ea9ef91ef1406b01d7c0663f552 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660604    5867 cache.go:107] acquiring lock: {Name:mkebde59ca6c1888e7bc7d15512f03f0354a68ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660620    5867 cache.go:107] acquiring lock: {Name:mkdbd4614d7c4546dc814133e17ac19bea3d416d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660632    5867 cache.go:107] acquiring lock: {Name:mke3ef47bf1bcfc097138c0ff14aa7ee697119a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:07.660668    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1014 07:52:07.660677    5867 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 170.791µs
	I1014 07:52:07.660681    5867 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1014 07:52:07.660685    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1014 07:52:07.660694    5867 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 186µs
	I1014 07:52:07.660704    5867 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1014 07:52:07.660712    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1014 07:52:07.660714    5867 cache.go:115] /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1014 07:52:07.660717    5867 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 195.458µs
	I1014 07:52:07.660719    5867 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 179.625µs
	I1014 07:52:07.660725    5867 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1014 07:52:07.660726    5867 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1014 07:52:07.660729    5867 cache.go:87] Successfully saved all images to host disk.
	I1014 07:52:07.660849    5867 start.go:360] acquireMachinesLock for no-preload-029000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:07.660897    5867 start.go:364] duration metric: took 42.75µs to acquireMachinesLock for "no-preload-029000"
	I1014 07:52:07.660907    5867 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:07.660911    5867 fix.go:54] fixHost starting: 
	I1014 07:52:07.661072    5867 fix.go:112] recreateIfNeeded on no-preload-029000: state=Stopped err=<nil>
	W1014 07:52:07.661080    5867 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:07.669285    5867 out.go:177] * Restarting existing qemu2 VM for "no-preload-029000" ...
	I1014 07:52:07.673295    5867 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:07.673333    5867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:39:50:59:c6:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:52:07.675518    5867 main.go:141] libmachine: STDOUT: 
	I1014 07:52:07.675534    5867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:07.675561    5867 fix.go:56] duration metric: took 14.647042ms for fixHost
	I1014 07:52:07.675565    5867 start.go:83] releasing machines lock for "no-preload-029000", held for 14.664125ms
	W1014 07:52:07.675572    5867 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:07.675608    5867 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:07.675612    5867 start.go:729] Will try again in 5 seconds ...
	I1014 07:52:12.677738    5867 start.go:360] acquireMachinesLock for no-preload-029000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:12.678141    5867 start.go:364] duration metric: took 321.5µs to acquireMachinesLock for "no-preload-029000"
	I1014 07:52:12.678262    5867 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:12.678281    5867 fix.go:54] fixHost starting: 
	I1014 07:52:12.678961    5867 fix.go:112] recreateIfNeeded on no-preload-029000: state=Stopped err=<nil>
	W1014 07:52:12.678993    5867 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:12.682367    5867 out.go:177] * Restarting existing qemu2 VM for "no-preload-029000" ...
	I1014 07:52:12.689433    5867 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:12.689664    5867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:39:50:59:c6:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/no-preload-029000/disk.qcow2
	I1014 07:52:12.699300    5867 main.go:141] libmachine: STDOUT: 
	I1014 07:52:12.699351    5867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:12.699424    5867 fix.go:56] duration metric: took 21.1435ms for fixHost
	I1014 07:52:12.699441    5867 start.go:83] releasing machines lock for "no-preload-029000", held for 21.278875ms
	W1014 07:52:12.699578    5867 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-029000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-029000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:12.707390    5867 out.go:201] 
	W1014 07:52:12.710464    5867 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:12.710511    5867 out.go:270] * 
	* 
	W1014 07:52:12.713276    5867 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:12.721371    5867 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-029000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (72.903792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
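Note: every qemu2 start in this group dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched with its network fd and provisioning aborts with GUEST_PROVISION. A host-side triage sketch follows; the socket and client paths are taken from the log above, while the daemon launch line follows socket_vmnet's documented usage and is an assumption about this host's install:

	# Check whether the socket exists and a daemon is serving it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If no daemon is running, start one as root (the gateway address is an example value)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet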

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-029000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (35.9115ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
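Note: the error context "no-preload-029000" does not exist is a downstream effect of the failed SecondStart above rather than an independent bug: the VM never came up, so minikube never wrote a kubeconfig context for the profile, and every kubectl call in the remaining subtests fails the same way. A quick confirmation sketch, using the profile name from the log:

	# The profile's context will be absent from the kubeconfig after a failed start
	kubectl config get-contexts
	out/minikube-darwin-arm64 status -p no-preload-029000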

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-029000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-029000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-029000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.360791ms)

** stderr ** 
	error: context "no-preload-029000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-029000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (34.085417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-029000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (34.215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
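Note: the "-want +got" block above is a go-cmp style diff: each "-" line is an image the test expected "image list" to report, and the empty "got" side means the command returned no images at all, consistent with the stopped host rather than with missing cache entries. The underlying check can be rerun by hand with the command from the log:

	out/minikube-darwin-arm64 -p no-preload-029000 image list --format=json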

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-029000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-029000 --alsologtostderr -v=1: exit status 83 (45.353083ms)

-- stdout --
	* The control-plane node no-preload-029000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-029000"

-- /stdout --
** stderr ** 
	I1014 07:52:13.019014    5886 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:13.019208    5886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:13.019212    5886 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:13.019214    5886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:13.019348    5886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:13.019574    5886 out.go:352] Setting JSON to false
	I1014 07:52:13.019583    5886 mustload.go:65] Loading cluster: no-preload-029000
	I1014 07:52:13.019812    5886 config.go:182] Loaded profile config "no-preload-029000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:13.023970    5886 out.go:177] * The control-plane node no-preload-029000 host is not running: state=Stopped
	I1014 07:52:13.026997    5886 out.go:177]   To start a cluster, run: "minikube start -p no-preload-029000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-029000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (33.607667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (34.183541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
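Note: pause fails fast with exit status 83 rather than the exit status 80 seen for the provisioning failures: the mustload step visible in the stderr log loads the profile, sees the stopped host, and prints advice instead of attempting any work. The precondition the post-mortem helper checks can be queried directly, using the same format template as the log:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000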

TestStartStop/group/embed-certs/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-921000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-921000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.960488959s)

-- stdout --
	* [embed-certs-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-921000" primary control-plane node in "embed-certs-921000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-921000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:52:13.358688    5903 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:13.358862    5903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:13.358865    5903 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:13.358868    5903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:13.359009    5903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:13.360192    5903 out.go:352] Setting JSON to false
	I1014 07:52:13.377672    5903 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4903,"bootTime":1728912630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:52:13.377749    5903 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:52:13.383022    5903 out.go:177] * [embed-certs-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:52:13.389959    5903 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:52:13.390001    5903 notify.go:220] Checking for updates...
	I1014 07:52:13.396885    5903 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:52:13.399951    5903 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:52:13.402981    5903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:52:13.405949    5903 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:52:13.408919    5903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:52:13.412357    5903 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:13.412421    5903 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:13.412463    5903 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:52:13.416932    5903 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:52:13.423969    5903 start.go:297] selected driver: qemu2
	I1014 07:52:13.423979    5903 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:52:13.423987    5903 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:52:13.426527    5903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:52:13.429850    5903 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:52:13.433024    5903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:52:13.433046    5903 cni.go:84] Creating CNI manager for ""
	I1014 07:52:13.433080    5903 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:52:13.433088    5903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:52:13.433127    5903 start.go:340] cluster config:
	{Name:embed-certs-921000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:13.437758    5903 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:13.445938    5903 out.go:177] * Starting "embed-certs-921000" primary control-plane node in "embed-certs-921000" cluster
	I1014 07:52:13.449894    5903 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:52:13.449909    5903 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:52:13.449917    5903 cache.go:56] Caching tarball of preloaded images
	I1014 07:52:13.449996    5903 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:52:13.450002    5903 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:52:13.450067    5903 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/embed-certs-921000/config.json ...
	I1014 07:52:13.450079    5903 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/embed-certs-921000/config.json: {Name:mk2a0759802febf8e4acc73963a7c85c0ecc9da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:52:13.450479    5903 start.go:360] acquireMachinesLock for embed-certs-921000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:13.450537    5903 start.go:364] duration metric: took 49.791µs to acquireMachinesLock for "embed-certs-921000"
	I1014 07:52:13.450554    5903 start.go:93] Provisioning new machine with config: &{Name:embed-certs-921000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:52:13.450593    5903 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:52:13.458914    5903 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:52:13.476857    5903 start.go:159] libmachine.API.Create for "embed-certs-921000" (driver="qemu2")
	I1014 07:52:13.476893    5903 client.go:168] LocalClient.Create starting
	I1014 07:52:13.476963    5903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:52:13.477004    5903 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:13.477018    5903 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:13.477058    5903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:52:13.477088    5903 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:13.477100    5903 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:13.477511    5903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:52:13.633274    5903 main.go:141] libmachine: Creating SSH key...
	I1014 07:52:13.730452    5903 main.go:141] libmachine: Creating Disk image...
	I1014 07:52:13.730458    5903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:52:13.730647    5903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:13.740649    5903 main.go:141] libmachine: STDOUT: 
	I1014 07:52:13.740670    5903 main.go:141] libmachine: STDERR: 
	I1014 07:52:13.740724    5903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2 +20000M
	I1014 07:52:13.749306    5903 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:52:13.749320    5903 main.go:141] libmachine: STDERR: 
	I1014 07:52:13.749336    5903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:13.749341    5903 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:52:13.749352    5903 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:13.749380    5903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:16:db:da:23:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:13.751195    5903 main.go:141] libmachine: STDOUT: 
	I1014 07:52:13.751214    5903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:13.751240    5903 client.go:171] duration metric: took 274.345958ms to LocalClient.Create
	I1014 07:52:15.753398    5903 start.go:128] duration metric: took 2.30281825s to createHost
	I1014 07:52:15.753462    5903 start.go:83] releasing machines lock for "embed-certs-921000", held for 2.302945333s
	W1014 07:52:15.753513    5903 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:15.768636    5903 out.go:177] * Deleting "embed-certs-921000" in qemu2 ...
	W1014 07:52:15.795380    5903 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:15.795416    5903 start.go:729] Will try again in 5 seconds ...
	I1014 07:52:20.797572    5903 start.go:360] acquireMachinesLock for embed-certs-921000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:20.798098    5903 start.go:364] duration metric: took 437.875µs to acquireMachinesLock for "embed-certs-921000"
	I1014 07:52:20.798223    5903 start.go:93] Provisioning new machine with config: &{Name:embed-certs-921000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:52:20.798530    5903 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:52:20.813263    5903 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:52:20.863905    5903 start.go:159] libmachine.API.Create for "embed-certs-921000" (driver="qemu2")
	I1014 07:52:20.863959    5903 client.go:168] LocalClient.Create starting
	I1014 07:52:20.864103    5903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:52:20.864183    5903 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:20.864200    5903 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:20.864280    5903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:52:20.864336    5903 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:20.864347    5903 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:20.864897    5903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:52:21.033971    5903 main.go:141] libmachine: Creating SSH key...
	I1014 07:52:21.221398    5903 main.go:141] libmachine: Creating Disk image...
	I1014 07:52:21.221405    5903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:52:21.221633    5903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:21.232103    5903 main.go:141] libmachine: STDOUT: 
	I1014 07:52:21.232129    5903 main.go:141] libmachine: STDERR: 
	I1014 07:52:21.232192    5903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2 +20000M
	I1014 07:52:21.240679    5903 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:52:21.240694    5903 main.go:141] libmachine: STDERR: 
	I1014 07:52:21.240715    5903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:21.240723    5903 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:52:21.240734    5903 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:21.240760    5903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e2:c9:a2:59:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:21.242547    5903 main.go:141] libmachine: STDOUT: 
	I1014 07:52:21.242562    5903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:21.242576    5903 client.go:171] duration metric: took 378.617041ms to LocalClient.Create
	I1014 07:52:23.244719    5903 start.go:128] duration metric: took 2.446166792s to createHost
	I1014 07:52:23.244823    5903 start.go:83] releasing machines lock for "embed-certs-921000", held for 2.446733166s
	W1014 07:52:23.245233    5903 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-921000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:23.258977    5903 out.go:201] 
	W1014 07:52:23.263080    5903 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:23.263106    5903 out.go:270] * 
	* 
	W1014 07:52:23.265951    5903 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:23.272896    5903 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-921000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (69.581083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.03s)
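Note: unlike the no-preload runs above, this is a first start, so the log shows the whole create path succeeding up to networking: the ISO is copied from the local cache, an SSH key and a qcow2 disk are created, and only the final socket_vmnet_client launch fails. The disk-image steps can be replayed in isolation with the exact commands from the log:

	qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2 +20000M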

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-921000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-921000 create -f testdata/busybox.yaml: exit status 1 (29.849459ms)

** stderr ** 
	error: context "embed-certs-921000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-921000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (34.522833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (34.405584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-921000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-921000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-921000 describe deploy/metrics-server -n kube-system: exit status 1 (26.893084ms)

** stderr ** 
	error: context "embed-certs-921000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-921000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (34.610459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
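Note: the addons enable step itself appears to succeed (no non-zero exit is reported for it); only the kubectl verification fails, again for want of a kubeconfig context. The image and registry override mechanism the test exercises is visible in the flags, copied from the log:

	out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-921000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# The verification step that fails without a context
	kubectl --context embed-certs-921000 describe deploy/metrics-server -n kube-system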

TestStartStop/group/embed-certs/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-921000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-921000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.197716416s)

-- stdout --
	* [embed-certs-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-921000" primary control-plane node in "embed-certs-921000" cluster
	* Restarting existing qemu2 VM for "embed-certs-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-921000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:52:27.129915    5953 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:27.130066    5953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:27.130069    5953 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:27.130071    5953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:27.130219    5953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:27.131254    5953 out.go:352] Setting JSON to false
	I1014 07:52:27.148693    5953 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4917,"bootTime":1728912630,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:52:27.148767    5953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:52:27.153757    5953 out.go:177] * [embed-certs-921000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:52:27.160605    5953 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:52:27.160658    5953 notify.go:220] Checking for updates...
	I1014 07:52:27.168704    5953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:52:27.171698    5953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:52:27.174720    5953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:52:27.177728    5953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:52:27.179283    5953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:52:27.183069    5953 config.go:182] Loaded profile config "embed-certs-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:27.183349    5953 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:52:27.187705    5953 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:52:27.193689    5953 start.go:297] selected driver: qemu2
	I1014 07:52:27.193696    5953 start.go:901] validating driver "qemu2" against &{Name:embed-certs-921000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:27.193750    5953 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:52:27.196281    5953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:52:27.196305    5953 cni.go:84] Creating CNI manager for ""
	I1014 07:52:27.196326    5953 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:52:27.196343    5953 start.go:340] cluster config:
	{Name:embed-certs-921000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-921000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:27.200938    5953 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:27.209683    5953 out.go:177] * Starting "embed-certs-921000" primary control-plane node in "embed-certs-921000" cluster
	I1014 07:52:27.213675    5953 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:52:27.213689    5953 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:52:27.213698    5953 cache.go:56] Caching tarball of preloaded images
	I1014 07:52:27.213769    5953 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:52:27.213775    5953 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:52:27.213845    5953 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/embed-certs-921000/config.json ...
	I1014 07:52:27.214331    5953 start.go:360] acquireMachinesLock for embed-certs-921000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:27.214361    5953 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "embed-certs-921000"
	I1014 07:52:27.214371    5953 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:27.214375    5953 fix.go:54] fixHost starting: 
	I1014 07:52:27.214500    5953 fix.go:112] recreateIfNeeded on embed-certs-921000: state=Stopped err=<nil>
	W1014 07:52:27.214508    5953 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:27.221711    5953 out.go:177] * Restarting existing qemu2 VM for "embed-certs-921000" ...
	I1014 07:52:27.225745    5953 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:27.225797    5953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e2:c9:a2:59:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:27.228064    5953 main.go:141] libmachine: STDOUT: 
	I1014 07:52:27.228083    5953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:27.228112    5953 fix.go:56] duration metric: took 13.735375ms for fixHost
	I1014 07:52:27.228118    5953 start.go:83] releasing machines lock for "embed-certs-921000", held for 13.752208ms
	W1014 07:52:27.228123    5953 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:27.228168    5953 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:27.228172    5953 start.go:729] Will try again in 5 seconds ...
	I1014 07:52:32.229309    5953 start.go:360] acquireMachinesLock for embed-certs-921000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:32.229397    5953 start.go:364] duration metric: took 59.083µs to acquireMachinesLock for "embed-certs-921000"
	I1014 07:52:32.229417    5953 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:32.229421    5953 fix.go:54] fixHost starting: 
	I1014 07:52:32.229564    5953 fix.go:112] recreateIfNeeded on embed-certs-921000: state=Stopped err=<nil>
	W1014 07:52:32.229571    5953 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:32.244335    5953 out.go:177] * Restarting existing qemu2 VM for "embed-certs-921000" ...
	I1014 07:52:32.251283    5953 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:32.251357    5953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e2:c9:a2:59:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/embed-certs-921000/disk.qcow2
	I1014 07:52:32.253554    5953 main.go:141] libmachine: STDOUT: 
	I1014 07:52:32.253571    5953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:32.253592    5953 fix.go:56] duration metric: took 24.1705ms for fixHost
	I1014 07:52:32.253599    5953 start.go:83] releasing machines lock for "embed-certs-921000", held for 24.190625ms
	W1014 07:52:32.253641    5953 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-921000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:32.263281    5953 out.go:201] 
	W1014 07:52:32.271310    5953 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:32.271319    5953 out.go:270] * 
	W1014 07:52:32.271902    5953 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:32.290288    5953 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-921000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (38.610292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.24s)
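
Every start attempt in this test fails the same way: connecting to "/var/run/socket_vmnet" is refused. The qemu2 driver launches the VM through socket_vmnet_client, so when the socket_vmnet daemon is not listening on that unix socket the VM can never boot. A minimal Go sketch of a triage probe (a hypothetical helper, not part of the test suite) that dials the same socket the driver uses:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // SocketVMnetPath from the cluster config logged above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused" here reproduces the driver's failure:
            // the socket_vmnet daemon is not running or not listening.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe is refused while the socket file exists, the daemon behind it is down; if the dial reports "no such file or directory", socket_vmnet was never started on this host.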

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-921000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (32.524125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
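
This failure is a downstream effect of SecondStart: because the VM never came up, minikube never wrote an "embed-certs-921000" context into the kubeconfig, so every client-based assertion in the rest of the group fails with the same "context does not exist" error. A short sketch of the lookup using client-go's clientcmd loader (illustrative, not the harness's actual code):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the way kubectl does (KUBECONFIG or the default path).
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
            os.Exit(1)
        }
        // The context is only written once a cluster actually starts, so after
        // the failed start above this lookup comes back empty.
        if _, ok := cfg.Contexts["embed-certs-921000"]; !ok {
            fmt.Fprintln(os.Stderr, `context "embed-certs-921000" does not exist`)
            os.Exit(1)
        }
        fmt.Println("context is present")
    }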

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-921000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-921000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-921000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.190916ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-921000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-921000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (33.40125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-921000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (33.601042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
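
The diff above uses the "(-want +got)" convention of github.com/google/go-cmp: every line prefixed with "-" is an expected image that never showed up, because "image list" has nothing to report for a cluster that never started. A minimal sketch of that comparison style, with the want list copied from the log and got left empty:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // Expected image set for v1.31.1, copied from the failure above.
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/coredns/coredns:v1.11.3",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/kube-controller-manager:v1.31.1",
            "registry.k8s.io/kube-proxy:v1.31.1",
            "registry.k8s.io/kube-scheduler:v1.31.1",
            "registry.k8s.io/pause:3.10",
        }
        var got []string // image list returned nothing for the stopped cluster
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
        }
    }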

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-921000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-921000 --alsologtostderr -v=1: exit status 83 (42.913209ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-921000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-921000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:52:32.536345    5979 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:32.536539    5979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:32.536542    5979 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:32.536545    5979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:32.536666    5979 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:32.536902    5979 out.go:352] Setting JSON to false
	I1014 07:52:32.536910    5979 mustload.go:65] Loading cluster: embed-certs-921000
	I1014 07:52:32.537127    5979 config.go:182] Loaded profile config "embed-certs-921000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:32.540424    5979 out.go:177] * The control-plane node embed-certs-921000 host is not running: state=Stopped
	I1014 07:52:32.544258    5979 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-921000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-921000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (32.980333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (32.610666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-921000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
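
Each post-mortem block in this report runs "minikube status --format={{.Host}}" and tolerates exit status 7, which here accompanies a Stopped host ("may be ok"). A sketch of the same probe via os/exec, with the binary path and profile name taken from the log and the exit-code handling illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", "embed-certs-921000")
        out, err := cmd.Output() // stdout is captured even on a non-zero exit
        host := strings.TrimSpace(string(out))
        if exitErr, ok := err.(*exec.ExitError); ok {
            // Exit status 7 pairs with host=Stopped in the log above.
            fmt.Printf("host=%q, exit code %d\n", host, exitErr.ExitCode())
            return
        }
        if err != nil {
            fmt.Println("failed to run status:", err)
            return
        }
        fmt.Printf("host=%q\n", host)
    }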

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-328000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-328000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.951510292s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-328000" primary control-plane node in "default-k8s-diff-port-328000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-328000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:52:32.990334    6003 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:32.990471    6003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:32.990475    6003 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:32.990477    6003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:32.990612    6003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:32.991982    6003 out.go:352] Setting JSON to false
	I1014 07:52:33.009901    6003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4922,"bootTime":1728912630,"procs":523,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:52:33.009964    6003 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:52:33.015291    6003 out.go:177] * [default-k8s-diff-port-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:52:33.022305    6003 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:52:33.022355    6003 notify.go:220] Checking for updates...
	I1014 07:52:33.030315    6003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:52:33.034197    6003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:52:33.037225    6003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:52:33.040255    6003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:52:33.043257    6003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:52:33.051510    6003 config.go:182] Loaded profile config "cert-expiration-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:33.051577    6003 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:33.051629    6003 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:52:33.055293    6003 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:52:33.062225    6003 start.go:297] selected driver: qemu2
	I1014 07:52:33.062232    6003 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:52:33.062238    6003 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:52:33.064786    6003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:52:33.067274    6003 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:52:33.070335    6003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:52:33.070362    6003 cni.go:84] Creating CNI manager for ""
	I1014 07:52:33.070390    6003 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:52:33.070403    6003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:52:33.070447    6003 start.go:340] cluster config:
	{Name:default-k8s-diff-port-328000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:33.075352    6003 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:33.083253    6003 out.go:177] * Starting "default-k8s-diff-port-328000" primary control-plane node in "default-k8s-diff-port-328000" cluster
	I1014 07:52:33.087244    6003 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:52:33.087263    6003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:52:33.087270    6003 cache.go:56] Caching tarball of preloaded images
	I1014 07:52:33.087347    6003 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:52:33.087353    6003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:52:33.087413    6003 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/default-k8s-diff-port-328000/config.json ...
	I1014 07:52:33.087425    6003 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/default-k8s-diff-port-328000/config.json: {Name:mk4dfcbd417528dcef23b890385f32700efb112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:52:33.087680    6003 start.go:360] acquireMachinesLock for default-k8s-diff-port-328000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:33.087736    6003 start.go:364] duration metric: took 46.5µs to acquireMachinesLock for "default-k8s-diff-port-328000"
	I1014 07:52:33.087753    6003 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:52:33.087785    6003 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:52:33.091364    6003 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:52:33.108196    6003 start.go:159] libmachine.API.Create for "default-k8s-diff-port-328000" (driver="qemu2")
	I1014 07:52:33.108223    6003 client.go:168] LocalClient.Create starting
	I1014 07:52:33.108298    6003 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:52:33.108342    6003 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:33.108351    6003 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:33.108393    6003 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:52:33.108426    6003 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:33.108431    6003 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:33.108832    6003 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:52:33.264801    6003 main.go:141] libmachine: Creating SSH key...
	I1014 07:52:33.427581    6003 main.go:141] libmachine: Creating Disk image...
	I1014 07:52:33.427589    6003 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:52:33.427791    6003 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:33.438147    6003 main.go:141] libmachine: STDOUT: 
	I1014 07:52:33.438169    6003 main.go:141] libmachine: STDERR: 
	I1014 07:52:33.438224    6003 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2 +20000M
	I1014 07:52:33.446614    6003 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:52:33.446628    6003 main.go:141] libmachine: STDERR: 
	I1014 07:52:33.446649    6003 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:33.446653    6003 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:52:33.446664    6003 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:33.446689    6003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:59:35:c4:e5:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:33.448396    6003 main.go:141] libmachine: STDOUT: 
	I1014 07:52:33.448413    6003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:33.448432    6003 client.go:171] duration metric: took 340.207ms to LocalClient.Create
	I1014 07:52:35.450580    6003 start.go:128] duration metric: took 2.362809083s to createHost
	I1014 07:52:35.450687    6003 start.go:83] releasing machines lock for "default-k8s-diff-port-328000", held for 2.362940167s
	W1014 07:52:35.450746    6003 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:35.461828    6003 out.go:177] * Deleting "default-k8s-diff-port-328000" in qemu2 ...
	W1014 07:52:35.492978    6003 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:35.493009    6003 start.go:729] Will try again in 5 seconds ...
	I1014 07:52:40.495151    6003 start.go:360] acquireMachinesLock for default-k8s-diff-port-328000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:40.495549    6003 start.go:364] duration metric: took 338.625µs to acquireMachinesLock for "default-k8s-diff-port-328000"
	I1014 07:52:40.495658    6003 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:52:40.495829    6003 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:52:40.503963    6003 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:52:40.549512    6003 start.go:159] libmachine.API.Create for "default-k8s-diff-port-328000" (driver="qemu2")
	I1014 07:52:40.549577    6003 client.go:168] LocalClient.Create starting
	I1014 07:52:40.549717    6003 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:52:40.549811    6003 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:40.549831    6003 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:40.549896    6003 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:52:40.549964    6003 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:40.549977    6003 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:40.551179    6003 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:52:40.728784    6003 main.go:141] libmachine: Creating SSH key...
	I1014 07:52:40.844627    6003 main.go:141] libmachine: Creating Disk image...
	I1014 07:52:40.844633    6003 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:52:40.844838    6003 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:40.854785    6003 main.go:141] libmachine: STDOUT: 
	I1014 07:52:40.854802    6003 main.go:141] libmachine: STDERR: 
	I1014 07:52:40.854858    6003 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2 +20000M
	I1014 07:52:40.863246    6003 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:52:40.863264    6003 main.go:141] libmachine: STDERR: 
	I1014 07:52:40.863277    6003 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:40.863282    6003 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:52:40.863290    6003 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:40.863316    6003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:9f:72:bc:ad:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:40.865100    6003 main.go:141] libmachine: STDOUT: 
	I1014 07:52:40.865113    6003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:40.865127    6003 client.go:171] duration metric: took 315.547083ms to LocalClient.Create
	I1014 07:52:42.867320    6003 start.go:128] duration metric: took 2.37148375s to createHost
	I1014 07:52:42.867426    6003 start.go:83] releasing machines lock for "default-k8s-diff-port-328000", held for 2.3718865s
	W1014 07:52:42.867919    6003 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-328000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:42.876753    6003 out.go:201] 
	W1014 07:52:42.882753    6003 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:42.882813    6003 out.go:270] * 
	W1014 07:52:42.885230    6003 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:42.895672    6003 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-328000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (71.831792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.03s)
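
The start path visible in the log makes exactly two attempts: create the host; on failure, delete the half-created machine, wait five seconds, and create it again; if the retry also fails, exit with GUEST_PROVISION. A schematic of that retry pattern (startHost is a stand-in, not minikube's real implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the driver's create path; in the log both
    // attempts fail while connecting to "/var/run/socket_vmnet".
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err == nil {
            return
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second)
        if err := startHost(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }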

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-831000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-831000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.841776542s)

                                                
                                                
-- stdout --
	* [newest-cni-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-831000" primary control-plane node in "newest-cni-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:52:37.534758    6019 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:37.534916    6019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:37.534919    6019 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:37.534921    6019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:37.535054    6019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:37.536248    6019 out.go:352] Setting JSON to false
	I1014 07:52:37.553805    6019 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4927,"bootTime":1728912630,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:52:37.553873    6019 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:52:37.558888    6019 out.go:177] * [newest-cni-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:52:37.566910    6019 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:52:37.566959    6019 notify.go:220] Checking for updates...
	I1014 07:52:37.573834    6019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:52:37.576850    6019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:52:37.579819    6019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:52:37.582831    6019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:52:37.585843    6019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:52:37.587589    6019 config.go:182] Loaded profile config "default-k8s-diff-port-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:37.587664    6019 config.go:182] Loaded profile config "multinode-613000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:37.587718    6019 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:52:37.591836    6019 out.go:177] * Using the qemu2 driver based on user configuration
	I1014 07:52:37.598665    6019 start.go:297] selected driver: qemu2
	I1014 07:52:37.598671    6019 start.go:901] validating driver "qemu2" against <nil>
	I1014 07:52:37.598677    6019 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:52:37.601244    6019 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1014 07:52:37.601288    6019 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1014 07:52:37.608835    6019 out.go:177] * Automatically selected the socket_vmnet network
	I1014 07:52:37.610404    6019 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 07:52:37.610428    6019 cni.go:84] Creating CNI manager for ""
	I1014 07:52:37.610452    6019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:52:37.610456    6019 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 07:52:37.610480    6019 start.go:340] cluster config:
	{Name:newest-cni-831000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:37.615249    6019 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:37.623879    6019 out.go:177] * Starting "newest-cni-831000" primary control-plane node in "newest-cni-831000" cluster
	I1014 07:52:37.627817    6019 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:52:37.627836    6019 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:52:37.627848    6019 cache.go:56] Caching tarball of preloaded images
	I1014 07:52:37.627943    6019 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:52:37.627949    6019 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:52:37.628018    6019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/newest-cni-831000/config.json ...
	I1014 07:52:37.628029    6019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/newest-cni-831000/config.json: {Name:mk114c690cdd5fad77e0077a3f622069a02e71b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:52:37.628302    6019 start.go:360] acquireMachinesLock for newest-cni-831000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:37.628354    6019 start.go:364] duration metric: took 45µs to acquireMachinesLock for "newest-cni-831000"
	I1014 07:52:37.628368    6019 start.go:93] Provisioning new machine with config: &{Name:newest-cni-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:52:37.628406    6019 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:52:37.631848    6019 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:52:37.650003    6019 start.go:159] libmachine.API.Create for "newest-cni-831000" (driver="qemu2")
	I1014 07:52:37.650030    6019 client.go:168] LocalClient.Create starting
	I1014 07:52:37.650112    6019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:52:37.650150    6019 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:37.650159    6019 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:37.650206    6019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:52:37.650238    6019 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:37.650246    6019 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:37.650640    6019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:52:37.806462    6019 main.go:141] libmachine: Creating SSH key...
	I1014 07:52:37.848728    6019 main.go:141] libmachine: Creating Disk image...
	I1014 07:52:37.848734    6019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:52:37.848948    6019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:37.858868    6019 main.go:141] libmachine: STDOUT: 
	I1014 07:52:37.858888    6019 main.go:141] libmachine: STDERR: 
	I1014 07:52:37.858955    6019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2 +20000M
	I1014 07:52:37.867420    6019 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:52:37.867438    6019 main.go:141] libmachine: STDERR: 
	I1014 07:52:37.867450    6019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:37.867457    6019 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:52:37.867468    6019 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:37.867497    6019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:47:a3:37:67:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:37.869386    6019 main.go:141] libmachine: STDOUT: 
	I1014 07:52:37.869400    6019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:37.869423    6019 client.go:171] duration metric: took 219.387ms to LocalClient.Create
	I1014 07:52:39.871578    6019 start.go:128] duration metric: took 2.243180625s to createHost
	I1014 07:52:39.871657    6019 start.go:83] releasing machines lock for "newest-cni-831000", held for 2.243322416s
	W1014 07:52:39.871754    6019 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:39.882792    6019 out.go:177] * Deleting "newest-cni-831000" in qemu2 ...
	W1014 07:52:39.911253    6019 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:39.911277    6019 start.go:729] Will try again in 5 seconds ...
	I1014 07:52:44.913365    6019 start.go:360] acquireMachinesLock for newest-cni-831000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:44.913819    6019 start.go:364] duration metric: took 346.708µs to acquireMachinesLock for "newest-cni-831000"
	I1014 07:52:44.913883    6019 start.go:93] Provisioning new machine with config: &{Name:newest-cni-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:52:44.914080    6019 start.go:125] createHost starting for "" (driver="qemu2")
	I1014 07:52:44.923931    6019 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:52:44.968844    6019 start.go:159] libmachine.API.Create for "newest-cni-831000" (driver="qemu2")
	I1014 07:52:44.968902    6019 client.go:168] LocalClient.Create starting
	I1014 07:52:44.969054    6019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/ca.pem
	I1014 07:52:44.969107    6019 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:44.969123    6019 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:44.969191    6019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19790-979/.minikube/certs/cert.pem
	I1014 07:52:44.969222    6019 main.go:141] libmachine: Decoding PEM data...
	I1014 07:52:44.969240    6019 main.go:141] libmachine: Parsing certificate...
	I1014 07:52:44.969809    6019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19790-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1014 07:52:45.138647    6019 main.go:141] libmachine: Creating SSH key...
	I1014 07:52:45.275500    6019 main.go:141] libmachine: Creating Disk image...
	I1014 07:52:45.275508    6019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1014 07:52:45.275762    6019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:45.285639    6019 main.go:141] libmachine: STDOUT: 
	I1014 07:52:45.285655    6019 main.go:141] libmachine: STDERR: 
	I1014 07:52:45.285723    6019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2 +20000M
	I1014 07:52:45.294263    6019 main.go:141] libmachine: STDOUT: Image resized.
	
	I1014 07:52:45.294284    6019 main.go:141] libmachine: STDERR: 
	I1014 07:52:45.294301    6019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:45.294307    6019 main.go:141] libmachine: Starting QEMU VM...
	I1014 07:52:45.294318    6019 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:45.294357    6019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:29:c2:41:42:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:45.296220    6019 main.go:141] libmachine: STDOUT: 
	I1014 07:52:45.296235    6019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:45.296252    6019 client.go:171] duration metric: took 327.347542ms to LocalClient.Create
	I1014 07:52:47.298438    6019 start.go:128] duration metric: took 2.38434875s to createHost
	I1014 07:52:47.298529    6019 start.go:83] releasing machines lock for "newest-cni-831000", held for 2.384716666s
	W1014 07:52:47.298923    6019 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:47.310454    6019 out.go:201] 
	W1014 07:52:47.317585    6019 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:47.317614    6019 out.go:270] * 
	* 
	W1014 07:52:47.319908    6019 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:47.331513    6019 out.go:201] 

** /stderr **
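The stderr log above also documents how libmachine builds the VM disk before launch: a raw seed image is converted to qcow2, then grown by the requested 20000 MB. A minimal Go sketch of that same two-step qemu-img sequence, using os/exec; the helper name and the relative paths are illustrative placeholders, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// createDiskImage mirrors the two commands visible in the log:
//   qemu-img convert -f raw -O qcow2 <raw> <qcow2>
//   qemu-img resize <qcow2> +<sizeMB>M
func createDiskImage(rawPath, qcow2Path string, sizeMB int) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
		rawPath, qcow2Path).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2Path,
		fmt.Sprintf("+%dM", sizeMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder paths standing in for the machine directory seen in the log.
	if err := createDiskImage("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
	}
}

Note that both qemu-img steps succeed in the log (STDERR is empty); the failure comes only afterwards, at VM launch.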
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-831000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000: exit status 7 (69.270916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.91s)
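Every start attempt in this group dies on the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. That is, QEMU's networking helper has no daemon listening on its unix socket, so socket_vmnet_client cannot hand a file descriptor to qemu-system-aarch64. A quick standalone probe of the socket, as a diagnostic sketch (not part of the test suite; the socket path matches SocketVMnetPath in the config dump above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the libmachine error in the log:
		// the socket file may exist, but no socket_vmnet daemon is serving it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this agent the probe would fail the same way the tests do, which points at the socket_vmnet daemon on the host rather than at minikube itself.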

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-328000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-328000 create -f testdata/busybox.yaml: exit status 1 (28.948291ms)

** stderr ** 
	error: context "default-k8s-diff-port-328000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-328000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (33.739416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (32.576333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
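The DeployApp failure is a downstream symptom: FirstStart never provisioned the cluster, so the kubeconfig context default-k8s-diff-port-328000 was never written and every kubectl --context invocation fails with "context does not exist". A small pre-check of that dependency, as a sketch only (it shells out to the real kubectl config get-contexts -o name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hasContext reports whether the named context exists in the active kubeconfig.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("default-k8s-diff-port-328000")
	if err != nil || !ok {
		fmt.Fprintln(os.Stderr, "context missing: the cluster was never provisioned")
		os.Exit(1)
	}
}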

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-328000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-328000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-328000 describe deploy/metrics-server -n kube-system: exit status 1 (27.720917ms)

** stderr ** 
	error: context "default-k8s-diff-port-328000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-328000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (32.502ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
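The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to reference the registry and image overridden via --images/--registries (fake.domain/registry.k8s.io/echoserver:1.4). Reduced to its core, the check is a substring match over the deployment description; a sketch under the same context name, which with no cluster fails exactly like the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Error deliberately ignored here: with a missing context, kubectl's
	// message ends up in the combined output and the match simply fails.
	out, _ := exec.Command("kubectl", "--context", "default-k8s-diff-port-328000",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(string(out), want) {
		fmt.Printf("addon did not load correct image, expected %q\n", want)
	}
}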

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-328000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-328000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.908100625s)

-- stdout --
	* [default-k8s-diff-port-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-328000" primary control-plane node in "default-k8s-diff-port-328000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-328000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-328000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:52:46.521623    6075 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:46.521770    6075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:46.521774    6075 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:46.521776    6075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:46.521894    6075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:46.522958    6075 out.go:352] Setting JSON to false
	I1014 07:52:46.540532    6075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4936,"bootTime":1728912630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:52:46.540606    6075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:52:46.544304    6075 out.go:177] * [default-k8s-diff-port-328000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:52:46.551327    6075 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:52:46.551378    6075 notify.go:220] Checking for updates...
	I1014 07:52:46.559246    6075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:52:46.563269    6075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:52:46.566196    6075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:52:46.569306    6075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:52:46.572281    6075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:52:46.575543    6075 config.go:182] Loaded profile config "default-k8s-diff-port-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:46.575829    6075 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:52:46.579194    6075 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:52:46.586177    6075 start.go:297] selected driver: qemu2
	I1014 07:52:46.586187    6075 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:46.586265    6075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:52:46.588836    6075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:52:46.588860    6075 cni.go:84] Creating CNI manager for ""
	I1014 07:52:46.588880    6075 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:52:46.588904    6075 start.go:340] cluster config:
	{Name:default-k8s-diff-port-328000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-328000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:46.593408    6075 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:46.600182    6075 out.go:177] * Starting "default-k8s-diff-port-328000" primary control-plane node in "default-k8s-diff-port-328000" cluster
	I1014 07:52:46.603317    6075 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:52:46.603332    6075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:52:46.603346    6075 cache.go:56] Caching tarball of preloaded images
	I1014 07:52:46.603428    6075 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:52:46.603434    6075 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:52:46.603499    6075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/default-k8s-diff-port-328000/config.json ...
	I1014 07:52:46.604003    6075 start.go:360] acquireMachinesLock for default-k8s-diff-port-328000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:47.298674    6075 start.go:364] duration metric: took 694.646375ms to acquireMachinesLock for "default-k8s-diff-port-328000"
	I1014 07:52:47.298836    6075 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:47.298860    6075 fix.go:54] fixHost starting: 
	I1014 07:52:47.299558    6075 fix.go:112] recreateIfNeeded on default-k8s-diff-port-328000: state=Stopped err=<nil>
	W1014 07:52:47.299594    6075 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:47.314456    6075 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-328000" ...
	I1014 07:52:47.321511    6075 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:47.321716    6075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:9f:72:bc:ad:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:47.332792    6075 main.go:141] libmachine: STDOUT: 
	I1014 07:52:47.332886    6075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:47.333033    6075 fix.go:56] duration metric: took 34.158542ms for fixHost
	I1014 07:52:47.333054    6075 start.go:83] releasing machines lock for "default-k8s-diff-port-328000", held for 34.343792ms
	W1014 07:52:47.333080    6075 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:47.333240    6075 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:47.333256    6075 start.go:729] Will try again in 5 seconds ...
	I1014 07:52:52.335472    6075 start.go:360] acquireMachinesLock for default-k8s-diff-port-328000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:52.335889    6075 start.go:364] duration metric: took 309.958µs to acquireMachinesLock for "default-k8s-diff-port-328000"
	I1014 07:52:52.336006    6075 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:52.336025    6075 fix.go:54] fixHost starting: 
	I1014 07:52:52.336773    6075 fix.go:112] recreateIfNeeded on default-k8s-diff-port-328000: state=Stopped err=<nil>
	W1014 07:52:52.336803    6075 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:52.346271    6075 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-328000" ...
	I1014 07:52:52.350301    6075 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:52.350559    6075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:9f:72:bc:ad:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/default-k8s-diff-port-328000/disk.qcow2
	I1014 07:52:52.360240    6075 main.go:141] libmachine: STDOUT: 
	I1014 07:52:52.360330    6075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:52.360403    6075 fix.go:56] duration metric: took 24.379542ms for fixHost
	I1014 07:52:52.360420    6075 start.go:83] releasing machines lock for "default-k8s-diff-port-328000", held for 24.510459ms
	W1014 07:52:52.360644    6075 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-328000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-328000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:52.370294    6075 out.go:201] 
	W1014 07:52:52.374378    6075 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:52.374403    6075 out.go:270] * 
	* 
	W1014 07:52:52.376932    6075 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:52.385375    6075 out.go:201] 

** /stderr **
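Unlike FirstStart, this run takes the fix path: acquireMachinesLock finds an existing machine config, fixHost sees state=Stopped from recreateIfNeeded, and minikube restarts the VM instead of provisioning a new one; the restart then hits the same socket_vmnet refusal. A schematic of that branch, with all names simplified stand-ins for the logged functions rather than the real signatures:

package main

import (
	"errors"
	"fmt"
)

type hostState int

const (
	stateStopped hostState = iota
	stateRunning
)

// fixHost sketches the logged flow: an existing but stopped machine is
// restarted rather than recreated ("Skipping create...Using existing
// machine configuration" followed by "unexpected machine state, will restart").
func fixHost(st hostState) error {
	if st == stateRunning {
		return nil // nothing to fix
	}
	fmt.Println(`* Restarting existing qemu2 VM ...`)
	// The restart relaunches QEMU through socket_vmnet_client, so it fails
	// with the same "Connection refused" as the create path did.
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := fixHost(stateStopped); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
	}
}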
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-328000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (70.413167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.98s)
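Each post-mortem runs minikube status --format={{.Host}} and tolerates exit status 7, which the helper annotates as "may be ok": the profile exists but the host is Stopped, so log retrieval is skipped rather than treated as a further error. The shape of that check, as a sketch with the binary path and profile name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-328000",
		"-n", "default-k8s-diff-port-328000")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 7 with state "Stopped" is the "may be ok" case in
		// helpers_test.go: the host simply is not running.
		fmt.Printf("status exited %d, host state %q\n", exitErr.ExitCode(), state)
		return
	}
	fmt.Printf("host state %q\n", state)
}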

TestStartStop/group/newest-cni/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-831000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-831000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.200644416s)

-- stdout --
	* [newest-cni-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-831000" primary control-plane node in "newest-cni-831000" cluster
	* Restarting existing qemu2 VM for "newest-cni-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1014 07:52:51.085549    6108 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:51.085694    6108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:51.085697    6108 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:51.085699    6108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:51.085836    6108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:51.086869    6108 out.go:352] Setting JSON to false
	I1014 07:52:51.104406    6108 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4941,"bootTime":1728912630,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 07:52:51.104473    6108 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:52:51.109901    6108 out.go:177] * [newest-cni-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 07:52:51.116843    6108 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:52:51.116907    6108 notify.go:220] Checking for updates...
	I1014 07:52:51.124791    6108 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 07:52:51.127841    6108 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 07:52:51.130850    6108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:52:51.133838    6108 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 07:52:51.136875    6108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:52:51.140169    6108 config.go:182] Loaded profile config "newest-cni-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:51.140438    6108 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:52:51.143832    6108 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 07:52:51.150878    6108 start.go:297] selected driver: qemu2
	I1014 07:52:51.150888    6108 start.go:901] validating driver "qemu2" against &{Name:newest-cni-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:51.150945    6108 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:52:51.153518    6108 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 07:52:51.153544    6108 cni.go:84] Creating CNI manager for ""
	I1014 07:52:51.153565    6108 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:52:51.153589    6108 start.go:340] cluster config:
	{Name:newest-cni-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:52:51.158066    6108 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:52:51.165796    6108 out.go:177] * Starting "newest-cni-831000" primary control-plane node in "newest-cni-831000" cluster
	I1014 07:52:51.168768    6108 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:52:51.168781    6108 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 07:52:51.168789    6108 cache.go:56] Caching tarball of preloaded images
	I1014 07:52:51.168839    6108 preload.go:172] Found /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 07:52:51.168844    6108 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:52:51.168895    6108 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/newest-cni-831000/config.json ...
	I1014 07:52:51.169377    6108 start.go:360] acquireMachinesLock for newest-cni-831000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:51.169406    6108 start.go:364] duration metric: took 23.125µs to acquireMachinesLock for "newest-cni-831000"
	I1014 07:52:51.169416    6108 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:51.169421    6108 fix.go:54] fixHost starting: 
	I1014 07:52:51.169539    6108 fix.go:112] recreateIfNeeded on newest-cni-831000: state=Stopped err=<nil>
	W1014 07:52:51.169546    6108 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:51.173871    6108 out.go:177] * Restarting existing qemu2 VM for "newest-cni-831000" ...
	I1014 07:52:51.181799    6108 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:51.181831    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:29:c2:41:42:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:51.183951    6108 main.go:141] libmachine: STDOUT: 
	I1014 07:52:51.183969    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:51.183999    6108 fix.go:56] duration metric: took 14.576584ms for fixHost
	I1014 07:52:51.184018    6108 start.go:83] releasing machines lock for "newest-cni-831000", held for 14.593ms
	W1014 07:52:51.184024    6108 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:51.184057    6108 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:51.184061    6108 start.go:729] Will try again in 5 seconds ...
	I1014 07:52:56.186174    6108 start.go:360] acquireMachinesLock for newest-cni-831000: {Name:mk09116d9c1e5913f75d61ccf337e6d00c4c712f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:52:56.186703    6108 start.go:364] duration metric: took 429.084µs to acquireMachinesLock for "newest-cni-831000"
	I1014 07:52:56.186840    6108 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:52:56.186863    6108 fix.go:54] fixHost starting: 
	I1014 07:52:56.187644    6108 fix.go:112] recreateIfNeeded on newest-cni-831000: state=Stopped err=<nil>
	W1014 07:52:56.187669    6108 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:52:56.194984    6108 out.go:177] * Restarting existing qemu2 VM for "newest-cni-831000" ...
	I1014 07:52:56.199042    6108 qemu.go:418] Using hvf for hardware acceleration
	I1014 07:52:56.199268    6108 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:29:c2:41:42:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19790-979/.minikube/machines/newest-cni-831000/disk.qcow2
	I1014 07:52:56.210086    6108 main.go:141] libmachine: STDOUT: 
	I1014 07:52:56.210138    6108 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1014 07:52:56.210244    6108 fix.go:56] duration metric: took 23.383ms for fixHost
	I1014 07:52:56.210267    6108 start.go:83] releasing machines lock for "newest-cni-831000", held for 23.54ms
	W1014 07:52:56.210436    6108 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1014 07:52:56.219021    6108 out.go:201] 
	W1014 07:52:56.229790    6108 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1014 07:52:56.229812    6108 out.go:270] * 
	* 
	W1014 07:52:56.232341    6108 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:52:56.245974    6108 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-831000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000: exit status 7 (74.701875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.28s)
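
Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the guest VM is never brought up. A minimal Go sketch (not part of the suite; the socket path is copied from the errors above) to confirm whether the socket_vmnet daemon is accepting connections on the runner:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path copied from the "Connection refused" errors above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Reproduces the driver's failure mode: the daemon is not
			// running, or the socket is not accessible to this user.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If the dial is refused, restarting the socket_vmnet daemon on the host (however it was installed, e.g. its launchd service) is the likely fix; the suggested "minikube delete -p newest-cni-831000" only recreates the guest, not the host-side socket.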

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-328000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (34.852416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
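
The post-mortem's status --format={{.Host}} argument is a Go text/template rendered against minikube's status data. A reduced sketch of the mechanism (the Status struct here is illustrative, cut down to the one field the template references):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in for the structure minikube renders;
	// only the Host field used by the --format template above is shown.
	type Status struct {
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the captured stdout above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
	}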

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-328000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-328000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-328000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.281ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-328000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-328000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (32.863292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
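
The context "default-k8s-diff-port-328000" does not exist errors are a kubeconfig lookup failure: the profile never started, so no context was ever written. A sketch of that check using k8s.io/client-go/tools/clientcmd (illustrative, not the test's actual code path):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the way kubectl does (KUBECONFIG env var
		// or ~/.kube/config) and look up the context the failing test names.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			panic(err)
		}
		name := "default-k8s-diff-port-328000"
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist\n", name)
		}
	}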

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-328000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (32.689583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
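
The (-want +got) output above is the diff format of github.com/google/go-cmp: every expected image carries a leading - because image list returned nothing from the stopped VM. A reduced sketch of how such a diff is produced (want list shortened; not the test's exact code):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: the host never started, so no images are listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}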

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-328000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-328000 --alsologtostderr -v=1: exit status 83 (43.921167ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-328000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-328000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:52:52.673001    6127 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:52.673195    6127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:52.673199    6127 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:52.673201    6127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:52.673337    6127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:52.673563    6127 out.go:352] Setting JSON to false
	I1014 07:52:52.673571    6127 mustload.go:65] Loading cluster: default-k8s-diff-port-328000
	I1014 07:52:52.673797    6127 config.go:182] Loaded profile config "default-k8s-diff-port-328000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:52.678291    6127 out.go:177] * The control-plane node default-k8s-diff-port-328000 host is not running: state=Stopped
	I1014 07:52:52.682282    6127 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-328000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-328000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (32.851666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (32.93475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-328000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
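
pause exits with status 83 here rather than a generic 1; minikube reserves distinct exit codes so callers can distinguish a stopped host from a hard failure. A sketch of how a harness can recover that code via os/exec (binary path copied from the log; the 83 is the value captured above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "default-k8s-diff-port-328000")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// With the profile's host stopped, this prints 83 as captured above.
			fmt.Printf("pause failed: exit status %d\n", exitErr.ExitCode())
		}
	}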

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-831000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000: exit status 7 (33.68ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-831000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-831000 --alsologtostderr -v=1: exit status 83 (45.831583ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-831000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-831000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:52:56.443980    6151 out.go:345] Setting OutFile to fd 1 ...
	I1014 07:52:56.444178    6151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:56.444181    6151 out.go:358] Setting ErrFile to fd 2...
	I1014 07:52:56.444184    6151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:52:56.444302    6151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 07:52:56.444531    6151 out.go:352] Setting JSON to false
	I1014 07:52:56.444538    6151 mustload.go:65] Loading cluster: newest-cni-831000
	I1014 07:52:56.444757    6151 config.go:182] Loaded profile config "newest-cni-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:52:56.448643    6151 out.go:177] * The control-plane node newest-cni-831000 host is not running: state=Stopped
	I1014 07:52:56.452634    6151 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-831000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-831000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000: exit status 7 (34.534709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-831000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000: exit status 7 (33.8525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-831000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (152/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.11
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 10.68
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.75
29 TestAddons/serial/Volcano 39.1
31 TestAddons/serial/GCPAuth/Namespaces 0.09
32 TestAddons/serial/GCPAuth/PullSecret 10.42
34 TestAddons/parallel/Registry 15.39
35 TestAddons/parallel/Ingress 18.83
36 TestAddons/parallel/InspektorGadget 10.26
37 TestAddons/parallel/MetricsServer 5.28
39 TestAddons/parallel/CSI 31.06
40 TestAddons/parallel/Headlamp 16.59
41 TestAddons/parallel/CloudSpanner 6.2
42 TestAddons/parallel/LocalPath 52.87
43 TestAddons/parallel/NvidiaDevicePlugin 5.19
44 TestAddons/parallel/Yakd 11.29
46 TestAddons/StoppedEnableDisable 12.43
54 TestHyperKitDriverInstallOrUpdate 11.5
57 TestErrorSpam/setup 36.3
58 TestErrorSpam/start 0.36
59 TestErrorSpam/status 0.25
60 TestErrorSpam/pause 0.63
61 TestErrorSpam/unpause 0.58
62 TestErrorSpam/stop 55.29
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 48.21
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 38.89
69 TestFunctional/serial/KubeContext 0.03
70 TestFunctional/serial/KubectlGetPods 0.05
73 TestFunctional/serial/CacheCmd/cache/add_remote 3.14
74 TestFunctional/serial/CacheCmd/cache/add_local 1.68
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
76 TestFunctional/serial/CacheCmd/cache/list 0.04
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
78 TestFunctional/serial/CacheCmd/cache/cache_reload 0.72
79 TestFunctional/serial/CacheCmd/cache/delete 0.08
80 TestFunctional/serial/MinikubeKubectlCmd 2.18
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.16
82 TestFunctional/serial/ExtraConfig 38.87
83 TestFunctional/serial/ComponentHealth 0.04
84 TestFunctional/serial/LogsCmd 0.65
85 TestFunctional/serial/LogsFileCmd 0.64
86 TestFunctional/serial/InvalidService 3.82
88 TestFunctional/parallel/ConfigCmd 0.25
89 TestFunctional/parallel/DashboardCmd 8.38
90 TestFunctional/parallel/DryRun 0.28
91 TestFunctional/parallel/InternationalLanguage 0.12
92 TestFunctional/parallel/StatusCmd 0.24
97 TestFunctional/parallel/AddonsCmd 0.11
98 TestFunctional/parallel/PersistentVolumeClaim 26.92
100 TestFunctional/parallel/SSHCmd 0.13
101 TestFunctional/parallel/CpCmd 0.43
103 TestFunctional/parallel/FileSync 0.07
104 TestFunctional/parallel/CertSync 0.45
108 TestFunctional/parallel/NodeLabels 0.04
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
112 TestFunctional/parallel/License 0.36
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.32
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
124 TestFunctional/parallel/ServiceCmd/DeployApp 6.1
125 TestFunctional/parallel/ServiceCmd/List 0.32
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
128 TestFunctional/parallel/ServiceCmd/Format 0.1
129 TestFunctional/parallel/ServiceCmd/URL 0.1
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
131 TestFunctional/parallel/ProfileCmd/profile_list 0.14
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
133 TestFunctional/parallel/MountCmd/any-port 5.96
134 TestFunctional/parallel/MountCmd/specific-port 0.97
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
136 TestFunctional/parallel/Version/short 0.05
137 TestFunctional/parallel/Version/components 0.17
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
142 TestFunctional/parallel/ImageCommands/ImageBuild 1.9
143 TestFunctional/parallel/ImageCommands/Setup 1.71
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.56
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.58
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.18
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.22
151 TestFunctional/parallel/DockerEnv/bash 0.32
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/CopyFile 0.04
175 TestImageBuild/serial/Setup 34.57
176 TestImageBuild/serial/NormalBuild 1.8
177 TestImageBuild/serial/BuildWithBuildArg 0.64
178 TestImageBuild/serial/BuildWithDockerIgnore 0.52
179 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.5
184 TestJSONOutput/start/Audit 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 4.9
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 70.74
256 TestStoppedBinaryUpgrade/Setup 3.38
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
274 TestNoKubernetes/serial/ProfileList 15.7
275 TestNoKubernetes/serial/Stop 2.01
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
293 TestStartStop/group/old-k8s-version/serial/Stop 3.58
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
304 TestStartStop/group/no-preload/serial/Stop 2.03
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
315 TestStartStop/group/embed-certs/serial/Stop 3.38
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.16
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
333 TestStartStop/group/newest-cni/serial/Stop 3.45
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1014 06:38:24.508540    1497 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1014 06:38:24.508968    1497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-306000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-306000: exit status 85 (104.724375ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-306000 | jenkins | v1.34.0 | 14 Oct 24 06:37 PDT |          |
	|         | -p download-only-306000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 06:37:57
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 06:37:57.812424    1498 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:37:57.812602    1498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:37:57.812605    1498 out.go:358] Setting ErrFile to fd 2...
	I1014 06:37:57.812607    1498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:37:57.812743    1498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	W1014 06:37:57.812843    1498 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19790-979/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19790-979/.minikube/config/config.json: no such file or directory
	I1014 06:37:57.814301    1498 out.go:352] Setting JSON to true
	I1014 06:37:57.834286    1498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":447,"bootTime":1728912630,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:37:57.834360    1498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:37:57.839947    1498 out.go:97] [download-only-306000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 06:37:57.840097    1498 notify.go:220] Checking for updates...
	W1014 06:37:57.840115    1498 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 06:37:57.842909    1498 out.go:169] MINIKUBE_LOCATION=19790
	I1014 06:37:57.845893    1498 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:37:57.850942    1498 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:37:57.853975    1498 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:37:57.857447    1498 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	W1014 06:37:57.863933    1498 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 06:37:57.864137    1498 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:37:57.868723    1498 out.go:97] Using the qemu2 driver based on user configuration
	I1014 06:37:57.868741    1498 start.go:297] selected driver: qemu2
	I1014 06:37:57.868755    1498 start.go:901] validating driver "qemu2" against <nil>
	I1014 06:37:57.868818    1498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 06:37:57.871916    1498 out.go:169] Automatically selected the socket_vmnet network
	I1014 06:37:57.877882    1498 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1014 06:37:57.877960    1498 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 06:37:57.878002    1498 cni.go:84] Creating CNI manager for ""
	I1014 06:37:57.878045    1498 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1014 06:37:57.878107    1498 start.go:340] cluster config:
	{Name:download-only-306000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:37:57.882902    1498 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 06:37:57.886993    1498 out.go:97] Downloading VM boot image ...
	I1014 06:37:57.887014    1498 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1014 06:38:10.953527    1498 out.go:97] Starting "download-only-306000" primary control-plane node in "download-only-306000" cluster
	I1014 06:38:10.953559    1498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:38:11.013188    1498 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1014 06:38:11.013213    1498 cache.go:56] Caching tarball of preloaded images
	I1014 06:38:11.013437    1498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:38:11.018143    1498 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1014 06:38:11.018150    1498 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:11.109558    1498 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1014 06:38:23.202758    1498 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:23.203357    1498 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:23.898846    1498 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1014 06:38:23.899049    1498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/download-only-306000/config.json ...
	I1014 06:38:23.899073    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/download-only-306000/config.json: {Name:mk73d5bf07ad3f3c2a9d2c1a30a6647fa5a1dc82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 06:38:23.899345    1498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:38:23.899585    1498 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1014 06:38:24.458344    1498 out.go:193] 
	W1014 06:38:24.464363    1498 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080 0x1080e1080] Decompressors:map[bz2:0x140006fc8c0 gz:0x140006fc8c8 tar:0x140006fc800 tar.bz2:0x140006fc810 tar.gz:0x140006fc850 tar.xz:0x140006fc860 tar.zst:0x140006fc870 tbz2:0x140006fc810 tgz:0x140006fc850 txz:0x140006fc860 tzst:0x140006fc870 xz:0x140006fc8d0 zip:0x140006fc8e0 zst:0x140006fc8d8] Getters:map[file:0x1400060f5e0 http:0x140008e00f0 https:0x140008e0140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1014 06:38:24.464398    1498 out_reason.go:110] 
	W1014 06:38:24.472311    1498 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 06:38:24.475258    1498 out.go:193] 
	
	
	* The control-plane node download-only-306000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-306000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.11s)
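
The non-fatal failure captured inside this passing test is a 404 on the kubectl checksum URL: dl.k8s.io ships no darwin/arm64 kubectl for v1.20.0 (Apple-silicon builds only exist for later releases), so this download cannot succeed on an arm64 runner. A quick Go probe of the URL from the log:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL copied from the failure above; a HEAD request is
		// enough to confirm the "bad response code: 404".
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Println(resp.Status) // expected: 404 Not Found
	}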

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-306000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (10.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-719000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-719000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (10.680311458s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.68s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1014 06:38:35.574187    1497 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1014 06:38:35.574250    1497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-719000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-719000: exit status 85 (77.600667ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-306000 | jenkins | v1.34.0 | 14 Oct 24 06:37 PDT |                     |
	|         | -p download-only-306000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Oct 24 06:38 PDT | 14 Oct 24 06:38 PDT |
	| delete  | -p download-only-306000        | download-only-306000 | jenkins | v1.34.0 | 14 Oct 24 06:38 PDT | 14 Oct 24 06:38 PDT |
	| start   | -o=json --download-only        | download-only-719000 | jenkins | v1.34.0 | 14 Oct 24 06:38 PDT |                     |
	|         | -p download-only-719000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 06:38:24
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 06:38:24.924811    1534 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:38:24.924956    1534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:38:24.924959    1534 out.go:358] Setting ErrFile to fd 2...
	I1014 06:38:24.924962    1534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:38:24.925093    1534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 06:38:24.926268    1534 out.go:352] Setting JSON to true
	I1014 06:38:24.943921    1534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":474,"bootTime":1728912630,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:38:24.943996    1534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:38:24.948811    1534 out.go:97] [download-only-719000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 06:38:24.948931    1534 notify.go:220] Checking for updates...
	I1014 06:38:24.952755    1534 out.go:169] MINIKUBE_LOCATION=19790
	I1014 06:38:24.955765    1534 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:38:24.959828    1534 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:38:24.962729    1534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:38:24.965768    1534 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	W1014 06:38:24.971776    1534 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 06:38:24.971973    1534 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:38:24.974763    1534 out.go:97] Using the qemu2 driver based on user configuration
	I1014 06:38:24.974773    1534 start.go:297] selected driver: qemu2
	I1014 06:38:24.974778    1534 start.go:901] validating driver "qemu2" against <nil>
	I1014 06:38:24.974830    1534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 06:38:24.977791    1534 out.go:169] Automatically selected the socket_vmnet network
	I1014 06:38:24.983411    1534 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1014 06:38:24.983519    1534 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 06:38:24.983537    1534 cni.go:84] Creating CNI manager for ""
	I1014 06:38:24.983561    1534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 06:38:24.983566    1534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 06:38:24.983613    1534 start.go:340] cluster config:
	{Name:download-only-719000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-719000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:38:24.987972    1534 iso.go:125] acquiring lock: {Name:mkd29166e1dc246803bc0e7f81b11fbff2fbf147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 06:38:24.990781    1534 out.go:97] Starting "download-only-719000" primary control-plane node in "download-only-719000" cluster
	I1014 06:38:24.990787    1534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:38:25.056234    1534 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 06:38:25.056253    1534 cache.go:56] Caching tarball of preloaded images
	I1014 06:38:25.056450    1534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:38:25.060677    1534 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1014 06:38:25.060684    1534 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:25.153496    1534 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1014 06:38:33.385991    1534 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:33.386163    1534 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19790-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1014 06:38:33.907503    1534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 06:38:33.907683    1534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/download-only-719000/config.json ...
	I1014 06:38:33.907701    1534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/download-only-719000/config.json: {Name:mke581d688c2c1d04e3ace6f686e4b1b198a49a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 06:38:33.907996    1534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:38:33.908154    1534 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19790-979/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-719000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-719000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
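
The preload download above pins an md5 in the URL (checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d), and the saving/verifying checksum lines show it being re-checked on disk. A simplified sketch of that verification (file name relative for brevity; the real logic lives in minikube's preload package):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const want = "402f69b5e09ccb1e1dbe401b4cdd104d" // value from the download URL above
		f, err := os.Open("preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// Hash the downloaded tarball and compare against the pinned value.
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			fmt.Printf("checksum mismatch: want %s, got %s\n", want, got)
			os.Exit(1)
		}
		fmt.Println("preload checksum verified")
	}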

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-719000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-943000
addons_test.go:935: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-943000: exit status 85 (65.139042ms)

-- stdout --
	* Profile "addons-943000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-943000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-943000
addons_test.go:946: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-943000: exit status 85 (61.145458ms)

-- stdout --
	* Profile "addons-943000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-943000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (200.75s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-943000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-943000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m20.751898541s)
--- PASS: TestAddons/Setup (200.75s)
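
Repro sketch: the start invocation above can be rerun by hand. This is a minimal sketch rather than the harness's exact command line; it assumes a minikube binary on PATH and a working qemu2 driver, and keeps only a representative subset of the addon flags (all taken from the command above):

    # start a profile with several of the addons the test enables
    minikube start -p addons-943000 --wait=true --memory=4000 --driver=qemu2 \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=ingress --addons=ingress-dns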

TestAddons/serial/Volcano (39.1s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:803: volcano-scheduler stabilized in 6.639791ms
addons_test.go:819: volcano-controller stabilized in 6.676291ms
addons_test.go:811: volcano-admission stabilized in 6.716375ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-lh2dk" [92f689e6-79ec-44f6-8d16-2f1519449589] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.009244125s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-nc9ll" [a32b7540-a6d3-41e6-9c7d-7a5bf3bfe1fc] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005541375s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-nkqjk" [d9a78a1c-31a9-412b-967d-c4d710a7644e] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004510708s
addons_test.go:838: (dbg) Run:  kubectl --context addons-943000 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-943000 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-943000 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [cc9fd7f0-5659-4e60-a316-a6588e40b4cf] Pending
helpers_test.go:344: "test-job-nginx-0" [cc9fd7f0-5659-4e60-a316-a6588e40b4cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [cc9fd7f0-5659-4e60-a316-a6588e40b4cf] Running
addons_test.go:870: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.008077s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable volcano --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-943000 addons disable volcano --alsologtostderr -v=1: (10.863527167s)
--- PASS: TestAddons/serial/Volcano (39.10s)
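
Repro sketch: the Volcano exercise above reduces to three kubectl calls. A minimal sketch, assuming the volcano addon is running and a job manifest like the test's testdata/vcjob.yaml (which creates a job in the my-volcano namespace) is at hand:

    # clear the one-shot admission init job, then submit and inspect a Volcano job
    kubectl --context addons-943000 delete -n volcano-system job volcano-admission-init
    kubectl --context addons-943000 create -f testdata/vcjob.yaml
    kubectl --context addons-943000 get vcjob -n my-volcano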

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-943000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-943000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/serial/GCPAuth/PullSecret (10.42s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-943000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-943000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22bcb896-74d0-45ba-8009-7bcef2c5a3c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22bcb896-74d0-45ba-8009-7bcef2c5a3c2] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 10.010364541s
addons_test.go:633: (dbg) Run:  kubectl --context addons-943000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-943000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-943000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-943000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (10.42s)
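
Repro sketch: the gcp-auth assertions above can be replayed manually. A minimal sketch, assuming a running pod named busybox in the default namespace (the test creates it from testdata/busybox.yaml):

    # confirm the gcp-auth webhook mounted credentials into the pod
    kubectl --context addons-943000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-943000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-943000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"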

TestAddons/parallel/Registry (15.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.355083ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-z5jbt" [09c43efa-f1c9-476a-89fb-48803364fee2] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.011465167s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zfvm8" [07ed01a4-be5b-4574-b65e-83f480d728b1] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009139666s
addons_test.go:331: (dbg) Run:  kubectl --context addons-943000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-943000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-943000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.0534235s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 ip
2024/10/14 06:43:10 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.39s)
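
Repro sketch: the registry probe is a single in-cluster wget plus an IP lookup from the host. A minimal sketch, assuming the registry addon is enabled on the profile:

    # probe the registry service from inside the cluster
    kubectl --context addons-943000 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # then fetch the node IP the registry-proxy is published on
    minikube -p addons-943000 ip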

TestAddons/parallel/Ingress (18.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-943000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-943000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-943000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9252e097-d7cb-4d3d-be5e-992638c692b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9252e097-d7cb-4d3d-be5e-992638c692b5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008553917s
I1014 06:43:37.588697    1497 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-943000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-943000 addons disable ingress --alsologtostderr -v=1: (7.244660333s)
--- PASS: TestAddons/parallel/Ingress (18.83s)
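
Repro sketch: the ingress check is a curl with a Host header run inside the VM, followed by an ingress-dns lookup against the node IP. A minimal sketch, assuming the nginx ingress/pod/service from the test's testdata manifests are deployed; 192.168.105.2 is this run's node IP, so substitute the output of "minikube -p addons-943000 ip":

    minikube -p addons-943000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.105.2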

TestAddons/parallel/InspektorGadget (10.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jzbmq" [4997af32-c084-44b9-a99b-2457fc81c6e1] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007132833s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-943000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.253220875s)
--- PASS: TestAddons/parallel/InspektorGadget (10.26s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.213042ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-z5t2d" [d37b4306-14c8-4393-ae6f-25b4d540d6c9] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007035958s
addons_test.go:402: (dbg) Run:  kubectl --context addons-943000 top pods -n kube-system
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (31.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1014 06:43:10.510732    1497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1014 06:43:10.514162    1497 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1014 06:43:10.514171    1497 kapi.go:107] duration metric: took 3.4865ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.489917ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ae613bdf-77fd-4264-a1b4-b53464c92828] Pending
helpers_test.go:344: "task-pv-pod" [ae613bdf-77fd-4264-a1b4-b53464c92828] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ae613bdf-77fd-4264-a1b4-b53464c92828] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.008893625s
addons_test.go:511: (dbg) Run:  kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-943000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-943000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-943000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-943000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5416dc86-7d7a-438e-a76b-ec54115d91eb] Pending
helpers_test.go:344: "task-pv-pod-restore" [5416dc86-7d7a-438e-a76b-ec54115d91eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5416dc86-7d7a-438e-a76b-ec54115d91eb] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010350792s
addons_test.go:553: (dbg) Run:  kubectl --context addons-943000 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-943000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-943000 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-943000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.164598625s)
--- PASS: TestAddons/parallel/CSI (31.06s)
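
Repro sketch: the CSI flow above is provision -> snapshot -> restore. A minimal sketch of the same order of operations, assuming the csi-hostpath-driver and volumesnapshots addons plus the test's testdata/csi-hostpath-driver manifests (the test additionally deletes the original pod and PVC before restoring):

    kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # wait for readyToUse=true before restoring
    kubectl --context addons-943000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-943000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml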

TestAddons/parallel/Headlamp (16.59s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-943000 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-g92cl" [caa60bc6-6b18-4df4-912d-9add7adc54b5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-g92cl" [caa60bc6-6b18-4df4-912d-9add7adc54b5] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004393916s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-943000 addons disable headlamp --alsologtostderr -v=1: (5.259130084s)
--- PASS: TestAddons/parallel/Headlamp (16.59s)

TestAddons/parallel/CloudSpanner (6.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-mlwzt" [76820667-58b8-4331-a120-1acc3408c806] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007728875s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.20s)

TestAddons/parallel/LocalPath (52.87s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-943000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-943000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-943000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [471e6056-efe7-43c8-a61e-7f5b3a9cebdc] Pending
helpers_test.go:344: "test-local-path" [471e6056-efe7-43c8-a61e-7f5b3a9cebdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [471e6056-efe7-43c8-a61e-7f5b3a9cebdc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [471e6056-efe7-43c8-a61e-7f5b3a9cebdc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00904275s
addons_test.go:902: (dbg) Run:  kubectl --context addons-943000 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 ssh "cat /opt/local-path-provisioner/pvc-920450e4-2a4a-4ca9-8c51-2bb073bcf354_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-943000 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-943000 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-943000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.315329959s)
--- PASS: TestAddons/parallel/LocalPath (52.87s)
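
Repro sketch: the local-path check writes through a PVC and reads the backing file straight off the node. A minimal sketch, assuming the test's storage-provisioner-rancher manifests; <pvc-uid> is a placeholder for the generated PVC UID visible in the ssh command above:

    kubectl --context addons-943000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-943000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # read the file the pod wrote, from the provisioner's host path
    minikube -p addons-943000 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"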

TestAddons/parallel/NvidiaDevicePlugin (5.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-btgss" [c9918bc8-a2b2-4890-96b2-aecace890278] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.009306542s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.19s)

TestAddons/parallel/Yakd (11.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-px4xr" [01f9cb2d-e727-4eb7-9075-e1f2132c065c] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00652s
addons_test.go:988: (dbg) Run:  out/minikube-darwin-arm64 -p addons-943000 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-darwin-arm64 -p addons-943000 addons disable yakd --alsologtostderr -v=1: (5.285660709s)
--- PASS: TestAddons/parallel/Yakd (11.29s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-943000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-943000: (12.216680625s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-943000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-943000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-943000
--- PASS: TestAddons/StoppedEnableDisable (12.43s)
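
Repro sketch: the point of this test is that addon toggles succeed against a stopped cluster. A minimal sketch, assuming an existing profile:

    minikube stop -p addons-943000
    # these must still exit 0 while the cluster is down
    minikube addons enable dashboard -p addons-943000
    minikube addons disable dashboard -p addons-943000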

TestHyperKitDriverInstallOrUpdate (11.5s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1014 07:49:10.331515    1497 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1014 07:49:10.331724    1497 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- PASS: TestHyperKitDriverInstallOrUpdate (11.50s)

TestErrorSpam/setup (36.3s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-098000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-098000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 --driver=qemu2 : (36.301153291s)
--- PASS: TestErrorSpam/setup (36.30s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (55.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 stop: (3.193196208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 stop: (26.059000167s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-098000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-098000 stop: (26.038231667s)
--- PASS: TestErrorSpam/stop (55.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19790-979/.minikube/files/etc/test/nested/copy/1497/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.21s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-365000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1014 06:46:57.193323    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:57.200944    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:57.214271    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:57.237596    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:57.280987    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:57.364338    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:57.527704    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:57.849920    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:58.493315    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:46:59.776669    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:47:02.338712    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:47:07.462409    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-365000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.210436667s)
--- PASS: TestFunctional/serial/StartWithProxy (48.21s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.89s)

=== RUN   TestFunctional/serial/SoftStart
I1014 06:47:14.027978    1497 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-365000 --alsologtostderr -v=8
E1014 06:47:17.705063    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
E1014 06:47:38.188758    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-365000 --alsologtostderr -v=8: (38.8887585s)
functional_test.go:663: soft start took 38.889149958s for "functional-365000" cluster.
I1014 06:47:52.916366    1497 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.89s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-365000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-365000 cache add registry.k8s.io/pause:3.1: (1.194322041s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-365000 cache add registry.k8s.io/pause:3.3: (1.115744917s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3040806227/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cache add minikube-local-cache-test:functional-365000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-365000 cache add minikube-local-cache-test:functional-365000: (1.357161125s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cache delete minikube-local-cache-test:functional-365000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-365000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)
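
Repro sketch: the local-cache flow builds a throwaway image and pushes it into minikube's image cache. A minimal sketch, assuming docker on the host and a build-context directory ./ctx with a Dockerfile (the test uses a generated temp dir):

    docker build -t minikube-local-cache-test:functional-365000 ./ctx
    minikube -p functional-365000 cache add minikube-local-cache-test:functional-365000
    # clean up the cache entry and the host-side image
    minikube -p functional-365000 cache delete minikube-local-cache-test:functional-365000
    docker rmi minikube-local-cache-test:functional-365000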

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.350833ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)
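
Repro sketch: cache reload is verified by deleting an image inside the VM and restoring it from the host-side cache. A minimal sketch, assuming registry.k8s.io/pause:latest was previously cached on the profile:

    minikube -p functional-365000 ssh sudo docker rmi registry.k8s.io/pause:latest
    # now fails with: no such image "registry.k8s.io/pause:latest" present
    minikube -p functional-365000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    minikube -p functional-365000 cache reload
    # succeeds again after the reload
    minikube -p functional-365000 ssh sudo crictl inspecti registry.k8s.io/pause:latest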

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (2.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 kubectl -- --context functional-365000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-365000 kubectl -- --context functional-365000 get pods: (2.180992959s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.18s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-365000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-365000 get pods: (1.158825083s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.16s)

TestFunctional/serial/ExtraConfig (38.87s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-365000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1014 06:48:19.152120    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-365000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.865029875s)
functional_test.go:761: restart took 38.865121667s for "functional-365000" cluster.
I1014 06:48:40.971976    1497 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.87s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-365000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2832337850/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (3.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-365000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-365000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-365000: exit status 115 (148.786083ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31492 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-365000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.82s)
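
Repro sketch: the expected failure above (exit status 115, SVC_UNREACHABLE) comes from pointing "minikube service" at a service whose pods never start. A minimal sketch, assuming the test's testdata/invalidsvc.yaml:

    kubectl --context functional-365000 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-365000    # exits 115: no running pod for service
    kubectl --context functional-365000 delete -f testdata/invalidsvc.yaml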

TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 config get cpus: exit status 14 (35.026541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 config get cpus: exit status 14 (36.536417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
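
Note: `config get` on an unset key is expected to fail; the exit status 14 recorded twice above is how this run reports "key not found in config". A minimal Go sketch of the same round-trip, assuming the built test binary path and profile name shown in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-darwin-arm64" // assumption: the built test binary, as in this log
	run := func(args ...string) int {
		cmd := exec.Command(bin, args...)
		if err := cmd.Run(); err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				return exitErr.ExitCode()
			}
			return -1 // binary missing or not executable
		}
		return 0
	}
	// "config get" on an unset key exits non-zero (14 in the log above),
	// while set/unset themselves succeed.
	fmt.Println(run("-p", "functional-365000", "config", "unset", "cpus")) // 0
	fmt.Println(run("-p", "functional-365000", "config", "get", "cpus"))   // 14: key not found
	fmt.Println(run("-p", "functional-365000", "config", "set", "cpus", "2"))
	fmt.Println(run("-p", "functional-365000", "config", "get", "cpus")) // 0 now that the key is set
}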

TestFunctional/parallel/DashboardCmd (8.38s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-365000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-365000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2091: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.38s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-365000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-365000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (152.884166ms)

-- stdout --
	* [functional-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1014 06:49:29.681479    2078 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:49:29.681644    2078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:29.681648    2078 out.go:358] Setting ErrFile to fd 2...
	I1014 06:49:29.681650    2078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:29.681798    2078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 06:49:29.682889    2078 out.go:352] Setting JSON to false
	I1014 06:49:29.700930    2078 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1139,"bootTime":1728912630,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:49:29.701023    2078 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:49:29.705318    2078 out.go:177] * [functional-365000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1014 06:49:29.712373    2078 notify.go:220] Checking for updates...
	I1014 06:49:29.718291    2078 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 06:49:29.725318    2078 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:49:29.733260    2078 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:49:29.743252    2078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:49:29.753293    2078 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 06:49:29.760432    2078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 06:49:29.764609    2078 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 06:49:29.764878    2078 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:49:29.769338    2078 out.go:177] * Using the qemu2 driver based on existing profile
	I1014 06:49:29.776297    2078 start.go:297] selected driver: qemu2
	I1014 06:49:29.776305    2078 start.go:901] validating driver "qemu2" against &{Name:functional-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:49:29.776362    2078 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 06:49:29.782328    2078 out.go:201] 
	W1014 06:49:29.788291    2078 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 06:49:29.797298    2078 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-365000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
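
The dry run fails deliberately: 250MB is below the floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message, so minikube exits 23 before touching the VM. A toy sketch of that floor check, with the 1800MB constant taken from the message above (the real validation in minikube's start path is more involved):

package main

import "fmt"

const minUsableMemoryMB = 1800 // from the RSRC_INSUFFICIENT_REQ_MEMORY message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // error, as in the dry run above
	fmt.Println(validateMemory(4000)) // nil: the profile's configured memory
}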

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-365000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-365000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.73375ms)

-- stdout --
	* [functional-365000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1014 06:49:29.555279    2074 out.go:345] Setting OutFile to fd 1 ...
	I1014 06:49:29.555426    2074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:29.555429    2074 out.go:358] Setting ErrFile to fd 2...
	I1014 06:49:29.555432    2074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:49:29.555569    2074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
	I1014 06:49:29.557090    2074 out.go:352] Setting JSON to false
	I1014 06:49:29.577190    2074 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1139,"bootTime":1728912630,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1014 06:49:29.577274    2074 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:49:29.583327    2074 out.go:177] * [functional-365000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1014 06:49:29.590301    2074 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 06:49:29.590370    2074 notify.go:220] Checking for updates...
	I1014 06:49:29.598261    2074 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	I1014 06:49:29.602288    2074 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1014 06:49:29.603541    2074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 06:49:29.606299    2074 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	I1014 06:49:29.609325    2074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 06:49:29.612676    2074 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 06:49:29.612927    2074 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:49:29.617325    2074 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1014 06:49:29.624304    2074 start.go:297] selected driver: qemu2
	I1014 06:49:29.624310    2074 start.go:901] validating driver "qemu2" against &{Name:functional-365000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-365000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:49:29.624364    2074 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 06:49:29.631346    2074 out.go:201] 
	W1014 06:49:29.635346    2074 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1014 06:49:29.639237    2074 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
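
The -f flag above takes a Go text/template rendered over the status fields (the "kublet" label is spelled that way in the test's own format string). A self-contained sketch with a stand-in struct, not minikube's actual status type:

package main

import (
	"os"
	"text/template"
)

type status struct { // hypothetical stand-in for the fields the format string references
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same format string as the test above, including its "kublet" label.
	const f = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	t := template.Must(template.New("status").Parse(f))
	t.Execute(os.Stdout, status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
}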

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (26.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e3036f33-437b-4841-9b2b-7e65226b2499] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.011309416s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-365000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-365000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-365000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-365000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8e40283a-118b-4b57-9b54-6950f876f998] Pending
helpers_test.go:344: "sp-pod" [8e40283a-118b-4b57-9b54-6950f876f998] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8e40283a-118b-4b57-9b54-6950f876f998] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.0062005s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-365000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-365000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-365000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e2b6c5f5-b223-4171-933c-a33df52aaab0] Pending
helpers_test.go:344: "sp-pod" [e2b6c5f5-b223-4171-933c-a33df52aaab0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e2b6c5f5-b223-4171-933c-a33df52aaab0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.009049291s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-365000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.92s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh -n functional-365000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cp functional-365000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1346291314/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh -n functional-365000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh -n functional-365000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)
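
A sketch of the cp round-trip this test performs: copy a file into the guest, read it back over ssh, and compare. The binary path, profile name, and testdata file are taken from this log and may differ in your checkout:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-darwin-arm64", "functional-365000"
	src := "testdata/cp-test.txt" // file used by the test above
	if err := exec.Command(bin, "-p", profile, "cp", src, "/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println(err)
		return
	}
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	want, _ := os.ReadFile(src)
	fmt.Println("round-trip ok:", string(got) == string(want))
}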

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1497/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo cat /etc/test/nested/copy/1497/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1497.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo cat /etc/ssl/certs/1497.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1497.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo cat /usr/share/ca-certificates/1497.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo cat /etc/ssl/certs/14972.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo cat /usr/share/ca-certificates/14972.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.45s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-365000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
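
The --template argument above is a Go template that ranges over the node's label map (after indexing `.items 0` of the kubectl response). The sketch below evaluates the same range expression locally over a sample label map:

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{ // sample labels; real nodes carry more kubernetes.io/* keys
		"kubernetes.io/arch": "arm64",
		"kubernetes.io/os":   "linux",
	}
	// Same range expression as the kubectl --template above, applied to the map directly.
	const tpl = "'{{range $k, $v := .}}{{$k}} {{end}}'\n"
	template.Must(template.New("labels").Parse(tpl)).Execute(os.Stdout, labels)
}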

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 ssh "sudo systemctl is-active crio": exit status 1 (144.801542ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
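
`systemctl is-active crio` prints "inactive" and exits 3 inside the guest; the ssh wrapper surfaces that as the exit status recorded above, and the test treats any non-active state as a pass for a runtime that should be disabled. A sketch of the same probe, intended to run inside the guest rather than on the macOS host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func isActive(unit string) (bool, string) {
	// Output still returns captured stdout even when the command exits non-zero.
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, state // e.g. "inactive" with exit status 3, as in the log above
		}
		return false, "unknown"
	}
	return state == "active", state
}

func main() {
	ok, state := isActive("crio")
	fmt.Println(ok, state)
}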

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-365000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-365000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-365000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-365000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1939: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-365000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-365000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1601aedf-f763-4aa6-8e5f-f8e9fcbea82c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1601aedf-f763-4aa6-8e5f-f8e9fcbea82c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.221839167s
I1014 06:48:58.078837    1497 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-365000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.185.131 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1014 06:48:58.152546    1497 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)
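
The dig invocation resolves the service name directly against the in-cluster DNS server 10.96.0.10, which is only reachable from the host while `minikube tunnel` is running. Roughly the same check in Go, with the server address and service name copied from this log:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every lookup to the cluster DNS service instead of the system resolver.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err)
}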

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1014 06:48:58.195845    1497 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-365000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-365000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-365000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-89ltn" [389668eb-b96d-42fa-b946-cafcb83e1517] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-89ltn" [389668eb-b96d-42fa-b946-cafcb83e1517] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.01138325s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.10s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 service list -o json
functional_test.go:1494: Took "285.572542ms" to run "out/minikube-darwin-arm64 -p functional-365000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31777
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31777
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
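
The endpoint found above is a NodePort URL on the VM's IP, so any plain HTTP client can reach the echoserver pod while the cluster is up. A sketch; the 192.168.105.4:31777 endpoint is specific to this run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	c := http.Client{Timeout: 5 * time.Second}
	resp, err := c.Get("http://192.168.105.4:31777") // endpoint from this log; differs per run
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body))
}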

TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "99.719042ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "37.700708ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "99.18625ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "38.665833ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)
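
`profile list -o json` is the machine-readable variant exercised above. A sketch of consuming it; the valid/Name field names are an assumption about the JSON shape sufficient for this example, not minikube's full schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles models only the fields this sketch needs (assumed shape).
type profiles struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		fmt.Println(err)
		return
	}
	for _, v := range p.Valid {
		fmt.Println(v.Name)
	}
}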

TestFunctional/parallel/MountCmd/any-port (5.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port511891368/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728913760928750000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port511891368/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728913760928750000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port511891368/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728913760928750000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port511891368/001/test-1728913760928750000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Done: out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T /mount-9p | grep 9p": (1.336158708s)
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 14 13:49 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 14 13:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 14 13:49 test-1728913760928750000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh cat /mount-9p/test-1728913760928750000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-365000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [35c45401-3349-4816-93d1-ae60dc798583] Pending
helpers_test.go:344: "busybox-mount" [35c45401-3349-4816-93d1-ae60dc798583] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [35c45401-3349-4816-93d1-ae60dc798583] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [35c45401-3349-4816-93d1-ae60dc798583] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008591042s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-365000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port511891368/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.96s)
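
The test drives a long-running `minikube mount` daemon and polls `ssh findmnt` until the 9p mount appears (note the 1.3s first findmnt above). A sketch of that start-poll-kill loop, using a hypothetical host path /tmp/mnt-src; the binary and profile names are from this log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	bin := "out/minikube-darwin-arm64"
	// Long-running mount daemon, as in the test above.
	mount := exec.Command(bin, "mount", "-p", "functional-365000", "/tmp/mnt-src:/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Println(err)
		return
	}
	defer mount.Process.Kill()
	// Poll until the 9p mount is visible inside the guest.
	for i := 0; i < 10; i++ {
		check := exec.Command(bin, "-p", "functional-365000", "ssh", "findmnt -T /mount-9p")
		if out, err := check.Output(); err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(time.Second) // retry, mirroring the retry.go backoff in the log
	}
	fmt.Println("mount never appeared")
}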

TestFunctional/parallel/MountCmd/specific-port (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4184654023/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.855083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 06:49:26.959323    1497 retry.go:31] will retry after 430.209012ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4184654023/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 ssh "sudo umount -f /mount-9p": exit status 1 (63.135541ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-365000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4184654023/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount1: exit status 1 (77.791459ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 06:49:27.945070    1497 retry.go:31] will retry after 344.590466ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount2: exit status 1 (57.472833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 06:49:28.435528    1497 retry.go:31] will retry after 597.020375ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-365000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-365000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2064539692/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-365000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-365000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-365000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-365000 image ls --format short --alsologtostderr:
I1014 06:49:38.966849    2229 out.go:345] Setting OutFile to fd 1 ...
I1014 06:49:38.967010    2229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:38.967014    2229 out.go:358] Setting ErrFile to fd 2...
I1014 06:49:38.967016    2229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:38.967146    2229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
I1014 06:49:38.967595    2229 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:38.967653    2229 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:38.968424    2229 ssh_runner.go:195] Run: systemctl --version
I1014 06:49:38.968432    2229 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/functional-365000/id_rsa Username:docker}
I1014 06:49:38.993237    2229 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-365000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kicbase/echo-server               | functional-365000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-365000 | b41c3a46cfcf0 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/library/nginx                     | latest            | 048e090385966 | 197MB  |
| docker.io/library/nginx                     | alpine            | 577a23b5858b9 | 50.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-365000 image ls --format table --alsologtostderr:
I1014 06:49:39.194761    2239 out.go:345] Setting OutFile to fd 1 ...
I1014 06:49:39.194949    2239 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.194953    2239 out.go:358] Setting ErrFile to fd 2...
I1014 06:49:39.194955    2239 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.195090    2239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
I1014 06:49:39.195575    2239 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.195632    2239 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.196577    2239 ssh_runner.go:195] Run: systemctl --version
I1014 06:49:39.196585    2239 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/functional-365000/id_rsa Username:docker}
I1014 06:49:39.221131    2239 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-365000 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c999
51fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.
k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-365000"],"size":"4780000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b41c3a46cfcf00affd9cda674094c59deff13e78c7401157e3502399ad8aa780","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-365000"],"size":"30"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"si
ze":"50800000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-365000 image ls --format json --alsologtostderr:
I1014 06:49:39.120408    2235 out.go:345] Setting OutFile to fd 1 ...
I1014 06:49:39.120576    2235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.120579    2235 out.go:358] Setting ErrFile to fd 2...
I1014 06:49:39.120582    2235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.120698    2235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
I1014 06:49:39.121096    2235 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.121155    2235 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.122011    2235 ssh_runner.go:195] Run: systemctl --version
I1014 06:49:39.122020    2235 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/functional-365000/id_rsa Username:docker}
I1014 06:49:39.145780    2235 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-365000 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "50800000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-365000
size: "4780000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b41c3a46cfcf00affd9cda674094c59deff13e78c7401157e3502399ad8aa780
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-365000
size: "30"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-365000 image ls --format yaml --alsologtostderr:
I1014 06:49:39.043134    2232 out.go:345] Setting OutFile to fd 1 ...
I1014 06:49:39.043326    2232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.043330    2232 out.go:358] Setting ErrFile to fd 2...
I1014 06:49:39.043332    2232 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.043480    2232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
I1014 06:49:39.044001    2232 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.044065    2232 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.045033    2232 ssh_runner.go:195] Run: systemctl --version
I1014 06:49:39.045043    2232 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/functional-365000/id_rsa Username:docker}
I1014 06:49:39.069673    2232 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-365000 ssh pgrep buildkitd: exit status 1 (62.95125ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image build -t localhost/my-image:functional-365000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-365000 image build -t localhost/my-image:functional-365000 testdata/build --alsologtostderr: (1.761935959s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-365000 image build -t localhost/my-image:functional-365000 testdata/build --alsologtostderr:
I1014 06:49:39.178387    2238 out.go:345] Setting OutFile to fd 1 ...
I1014 06:49:39.178715    2238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.178718    2238 out.go:358] Setting ErrFile to fd 2...
I1014 06:49:39.178721    2238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 06:49:39.178866    2238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19790-979/.minikube/bin
I1014 06:49:39.179322    2238 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.180134    2238 config.go:182] Loaded profile config "functional-365000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 06:49:39.181151    2238 ssh_runner.go:195] Run: systemctl --version
I1014 06:49:39.181166    2238 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19790-979/.minikube/machines/functional-365000/id_rsa Username:docker}
I1014 06:49:39.205652    2238 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1566220734.tar
I1014 06:49:39.205734    2238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 06:49:39.209400    2238 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1566220734.tar
I1014 06:49:39.211113    2238 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1566220734.tar: stat -c "%s %y" /var/lib/minikube/build/build.1566220734.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1566220734.tar': No such file or directory
I1014 06:49:39.211124    2238 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1566220734.tar --> /var/lib/minikube/build/build.1566220734.tar (3072 bytes)
I1014 06:49:39.220029    2238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1566220734
I1014 06:49:39.224990    2238 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1566220734 -xf /var/lib/minikube/build/build.1566220734.tar
I1014 06:49:39.229517    2238 docker.go:360] Building image: /var/lib/minikube/build/build.1566220734
I1014 06:49:39.229584    2238 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-365000 /var/lib/minikube/build/build.1566220734
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:50fc18ad4cdc9342675a35e70fec2cc1137269da4d472e0d75db7d7bf86b9ae4 done
#8 naming to localhost/my-image:functional-365000 done
#8 DONE 0.0s
I1014 06:49:40.850612    2238 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-365000 /var/lib/minikube/build/build.1566220734: (1.621029834s)
I1014 06:49:40.850702    2238 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1566220734
I1014 06:49:40.855023    2238 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1566220734.tar
I1014 06:49:40.858285    2238 build_images.go:217] Built localhost/my-image:functional-365000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1566220734.tar
I1014 06:49:40.858301    2238 build_images.go:133] succeeded building to: functional-365000
I1014 06:49:40.858304    2238 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.90s)

TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.693329209s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-365000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image load --daemon kicbase/echo-server:functional-365000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.56s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image load --daemon kicbase/echo-server:functional-365000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.58s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-365000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image load --daemon kicbase/echo-server:functional-365000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image save kicbase/echo-server:functional-365000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.18s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image rm kicbase/echo-server:functional-365000 --alsologtostderr
2024/10/14 06:49:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-365000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 image save --daemon kicbase/echo-server:functional-365000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-365000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.22s)

TestFunctional/parallel/DockerEnv/bash (0.32s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-365000 docker-env) && out/minikube-darwin-arm64 status -p functional-365000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-365000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-365000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-365000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-365000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-365000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/CopyFile (0.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-063000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.04s)

TestImageBuild/serial/Setup (34.57s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-438000 --driver=qemu2 
E1014 07:20:00.291849    1497 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19790-979/.minikube/profiles/addons-943000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-438000 --driver=qemu2 : (34.574426209s)
--- PASS: TestImageBuild/serial/Setup (34.57s)

TestImageBuild/serial/NormalBuild (1.8s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-438000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-438000: (1.801530209s)
--- PASS: TestImageBuild/serial/NormalBuild (1.80s)

TestImageBuild/serial/BuildWithBuildArg (0.64s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-438000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.64s)

TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-438000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.5s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-438000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.50s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (4.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-467000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-467000 --output=json --user=testUser: (4.900908042s)
--- PASS: TestJSONOutput/stop/Command (4.90s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-878000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-878000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.963542ms)

-- stdout --
	{"specversion":"1.0","id":"217e47ac-46a0-4020-b311-c1d60e385c2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-878000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e626b8b-1541-4018-894c-f6bcbb135bf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"b7969110-4d5b-43ef-a0da-a34ab5a4531e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig"}}
	{"specversion":"1.0","id":"c2aa65f9-7d2c-4bdf-8eaf-3fa737ba0d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"16941a8b-4814-4d4b-980d-5c2c96aa6ed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b0ffc6ef-312b-4e59-a358-986cb46c3892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube"}}
	{"specversion":"1.0","id":"71ee185a-e32b-478a-884c-b78d0829e618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"35671d88-917a-4381-93ae-6089dc9c9cc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-878000
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (70.74s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-752000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-752000 --driver=qemu2 : (34.898110083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-754000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-754000 --driver=qemu2 : (35.143799709s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-752000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-754000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-754000
helpers_test.go:175: Cleaning up "first-752000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-752000
--- PASS: TestMinikubeProfile (70.74s)

TestStoppedBinaryUpgrade/Setup (3.38s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.38s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-496000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-500000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (105.001167ms)

-- stdout --
	* [NoKubernetes-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19790-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19790-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-500000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-500000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.977833ms)

-- stdout --
	* The control-plane node NoKubernetes-500000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-500000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (15.7s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.6126535s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.70s)

TestNoKubernetes/serial/Stop (2.01s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-500000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-500000: (2.010187875s)
--- PASS: TestNoKubernetes/serial/Stop (2.01s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-500000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-500000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (48.129292ms)

-- stdout --
	* The control-plane node NoKubernetes-500000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-500000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-554000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-554000 --alsologtostderr -v=3: (3.583386125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-554000 -n old-k8s-version-554000: exit status 7 (64.363583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-554000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/no-preload/serial/Stop (2.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-029000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-029000 --alsologtostderr -v=3: (2.031108208s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-029000 -n no-preload-029000: exit status 7 (61.541084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-029000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-921000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-921000 --alsologtostderr -v=3: (3.383617042s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.38s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-921000 -n embed-certs-921000: exit status 7 (61.091167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-921000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-328000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-328000 --alsologtostderr -v=3: (3.161430042s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-328000 -n default-k8s-diff-port-328000: exit status 7 (61.317167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-328000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-831000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-831000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-831000 --alsologtostderr -v=3: (3.44541225s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-831000 -n newest-cni-831000: exit status 7 (60.54225ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-831000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
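
The newest-cni DeployApp, UserAppExistsAfterStop and AddonExistsAfterStop entries pass in 0.00s because the harness skips the pod-level assertions whenever a CNI is configured, as the repeated warning notes. A sketch of that early-return guard, assuming it is a plain conditional (hypothetical shape, not the literal test code):

package sketch

import "testing"

// validateAfterStop shows the guard implied by the repeated
// "cni mode requires additional setup before pods can schedule :(" warning:
// with a CNI network plugin selected no workload was deployed, so there is
// nothing to assert after the stop/start cycle and the subtest is a no-op pass.
func validateAfterStop(t *testing.T, usingCNI bool, assert func(*testing.T)) {
	if usingCNI {
		t.Log("WARNING: cni mode requires additional setup before pods can schedule :(")
		return
	}
	assert(t)
}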


Test skip (22/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-513000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-513000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-513000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /etc/hosts:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /etc/resolv.conf:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-513000

>>> host: crictl pods:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: crictl containers:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> k8s: describe netcat deployment:
error: context "cilium-513000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-513000" does not exist

>>> k8s: netcat logs:
error: context "cilium-513000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-513000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-513000" does not exist

>>> k8s: coredns logs:
error: context "cilium-513000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-513000" does not exist

>>> k8s: api server logs:
error: context "cilium-513000" does not exist

>>> host: /etc/cni:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: ip a s:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: ip r s:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: iptables-save:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: iptables table nat:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-513000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-513000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-513000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-513000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-513000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-513000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-513000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-513000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-513000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-513000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-513000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: kubelet daemon config:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> k8s: kubelet logs:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-513000

>>> host: docker daemon status:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: docker daemon config:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: docker system info:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: cri-docker daemon status:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: cri-docker daemon config:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: cri-dockerd version:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: containerd daemon status:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: containerd daemon config:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: containerd config dump:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: crio daemon status:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: crio daemon config:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: /etc/crio:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

>>> host: crio config:
* Profile "cilium-513000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513000"

----------------------- debugLogs end: cilium-513000 [took: 2.319718917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-513000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-513000
--- SKIP: TestNetworkPlugins/group/cilium (2.43s)
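
Every probe in the cilium debugLogs dump above fails with "context was not found" or "Profile ... not found" because the skip fires before any cluster is created; the captured kubeconfig (clusters: null, current-context: "") confirms there was nothing to inspect. A collector could bail out early by checking for the kubeconfig context first; a minimal sketch in Go, assuming it shells out to kubectl (hypothetical helper, not the actual net_test.go logic):

package sketch

import (
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl knows about the named context, so a
// debug-log collector can return early instead of emitting dozens of
// "context was not found" errors for a profile that was never started.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}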

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-329000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
